🧱 Pega Web Components
This is the local development page. Each component below is a standard
<pega-*> custom element powered by React under the
hood.
Persona Hero
Reimagine business outcomes
while breaking nothing.
Only Pega AI is grounded in best practices with governance built in –
so you can continuously design, adapt, and grow your business without
disruption.
Build a Blueprint
Explore the platform
How do you want to innovate?
Hero Carousel (Variant A) (slotted children)
Reimagine business outcomes
while breaking nothing.
Only Pega AI is grounded in best practices with governance built in –
so you can continuously design, adapt, and grow your business without
disruption.
Build a Blueprint
Explore the platform
Hero Carousel (Variant B) (centered layout)
Reimagine business outcomes
while breaking nothing.
Only Pega AI is grounded in best practices with governance built in –
so you can continuously design, adapt, and grow your business without
disruption.
Build a Blueprint
Explore the platform
Customer Logos (slotted children)
Trusted by leading organizations worldwide
HSBC
HSBC logo
Citi
Citi logo
Allianz
Allianz logo
AEGON
AEGON logo
Cigna
Cigna logo
CVS Health
CVS Health logo
Vodafone
Vodafone logo
Cisco
Cisco logo
Siemens
Siemens logo
Google
Google logo
Stacking Accordion
Digital self-service
One Pega Platform.
Four paths to becoming a modern AI enterprise.
Workflow Automation
Cut process time in half and free teams up for work that matters
most.
76%
Decrease in onboarding time
See how HealthFirst did it
Customer Service
Reduce handle time per interaction, at enterprise scale.
80%
Orders automated
See how Citi did it
Customer Engagement
Drive measurable lift on every interaction, for every customer,
across every channel.
40%
Increase in channel revenue
See how Wells Fargo did it
Legacy Transformation
Go from thousands of lines of legacy code to a cloud-ready
prototype in weeks, not years.
50%
Reduction in process lead times
See how Deutsche Bahn did it
Digital engagement
This is where ambition meets accountability, at enterprise
scale
Reimagine your business
Reimagine your business
with Pega Blueprint™
Anyone can automate a process. Pega drives outcomes – with every
decision grounded in your business rules, your compliance standards,
and real-time context.
Run it predictably
Run it predictably
with the market's most predictable AI agents
Our AI agents don't just act. They each operate with context,
compliance, and human oversight from the start. So you can
orchestrate your agentic workforce end-to-end.
Design for evolution
Design for evolution
with Pega's unique platform architecture
With Pega Platform, AI is unified, predictable, and safe for
mission-critical work. Our architecture is also key to unlocking
scale and continuous evolution for your business.
Connect everything
Use any AI, any cloud, any data
with Pega's model-agnostic, ecosystem-friendly approach
Get the most out of the systems that work for your business. Pega's
flexible platform approach lets you access and connect everything
you have while making room for what's next.
Blueprint
Meet your reinvention engine
Pega Blueprint™ brings business and IT together in one AI-powered
workspace, combining proven best practices and governance so your
design-time innovations become confident, production-grade
go-lives.
Build a Blueprint
ROI Calculator V2
AI Cost Estimator
Every token your agent thinks
is a bill you pay.
How much money are you wasting today on AI re-reasoning?
LLM-orchestrated workflows don't just call agents – they reason
through the entire workflow history before every single action. That
context window compounds. So does the cost.
Calculate your exposure
See how it works
How it works
LLM-orchestrated workflows don't
just call agents – they
re-reason everything.
Before every single action, the orchestrator re-reads the entire
workflow history to decide what to do next. That context window
compounds with every step. Most platforms never mention this.
5–20×
LLM-orchestrated workflows cost 5–20× more per run than Pega's
targeted execution.
And the gap compounds with every additional workflow step, as context
windows grow and token usage accelerates.
AI-native orchestration
Re-reason at every step
A master LLM re-reads the full workflow history at every step before
dispatching the next action. By step 20, it's processing everything
from steps 1–19 just to decide what step 20 should be. Token costs
don't grow linearly – they grow quadratically.
With Pega
Deterministic execution,
targeted agents
Pega separates the thinking from the doing. Deterministic
orchestration handles the workflow. Targeted agents handle the
judgment. You only pay LLM prices where LLMs are actually
needed.
AI Cost Reality Check
How much are you
spending today?
Configure your workflow below. Switch to Advanced for granular token
and pricing controls.
Simple estimate
Advanced options
Expert
Start with a scenario β or set your own below
Workflows per month
workflows
Total steps per workflow
steps
AI agent steps
agent steps
Show my annual savings
Workflow configuration
AI agent steps
Total workflow steps
Monthly volume (cases)
Per agent call β Pega targeted step
Input tokens (Pega step)
Output tokens (Pega step)
Per step β LLM orchestration
Context growth / step
Output tokens / step
Token pricing ($ per million tokens)
Input token price
Output token price
Pega platform cost
Price per case ($)
Calculate Savings
Your estimate
Here's what Pega
saves you.
Without Pega β annual spend
All steps sent to Claude Sonnet ($3/$15 per 1M)
With Pega β annual spend
Includes $0.60/case Pega platform cost
Estimated annual savings
Based on 10,000 workflows/month · 40 total steps · 28 agent
steps
% saved
Talk to an expert
See the data
Estimates use the same underlying model in both Simple and Advanced
modes: Pega cost = (agent steps × scoped token cost) + platform fee;
AI-native cost = Σ(step i) [2,000 + i × context growth per step] input
tokens + output tokens at every step. Default pricing: Claude Sonnet
$3/M input · $15/M output. Pega platform fee: $0.60/case. All
assumptions adjustable in Advanced mode. Estimates are
illustrative.
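As a sanity check, the footnote's model can be sketched directly. Pricing, the 2,000-token base prompt, and the platform fee come from the text above; the per-call token counts (1,500 in / 400 out per Pega agent call, 700 output tokens per orchestration step) are assumptions picked from the tooltips' stated typical ranges, not confirmed calculator defaults:

```python
# Sketch of the footnote's cost model. Per-call token counts below are
# assumed values chosen from the stated "typical ranges".
IN_PRICE = 3 / 1_000_000    # $ per input token (Claude Sonnet default)
OUT_PRICE = 15 / 1_000_000  # $ per output token

def pega_cost_per_run(agent_steps, in_tok=1_500, out_tok=400, fee=0.60):
    """Targeted agent calls only, plus the $0.60/case platform fee."""
    return agent_steps * (in_tok * IN_PRICE + out_tok * OUT_PRICE) + fee

def ai_native_cost_per_run(total_steps, growth=2_000, out_tok=700, base=2_000):
    """Orchestrator re-reads a growing history before every step."""
    input_tokens = sum(base + i * growth for i in range(total_steps))
    return input_tokens * IN_PRICE + total_steps * out_tok * OUT_PRICE

# Default scenario from the estimate: 40 total steps, 28 agent steps.
pega = pega_cost_per_run(agent_steps=28)         # ≈ $0.89 per run
native = ai_native_cost_per_run(total_steps=40)  # ≈ $5.34 per run
```

At these assumptions the orchestrated run costs roughly six times the Pega run; where a scenario lands in the 5–20× range depends mostly on how fast context grows per step.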
Visual proof
Why AI-native orchestration
costs grow quadratically.
Every step, the AI-native orchestrator re-reads the entire workflow
history to decide what to do next. At step 20, it's processing
everything from steps 1–19. The input token bill doesn't grow linearly
– it grows quadratically. Pega's deterministic engine has no such
overhead.
Cumulative cost per workflow run β step by step
1,000 monthly cases · Claude Sonnet pricing · costs compound as
context grows
Pega (deterministic + targeted agents)
AI-native (re-reasons every step)
The chart plots cumulative cost in dollars step-by-step through a
single workflow run. Pega's line rises only at agent steps; the
orchestration line rises at every step and accelerates as context
grows.
Step
Pega (targeted agents)
LLM-orchestrated
How we calculated this – full assumptions & methodology
Full model – assumptions, token math & formula methodology
All values reflect your current calculator inputs. Updates
dynamically as you adjust settings above.
Download CSV
Print / Save PDF
Input Assumptions
Calculated Results
Per-run comparison. Multiply by monthly volume for total cost.
Formula methodology
Complete the calculator above and click
"Show my annual savings"
to see your estimate.
Token pricing sourced from provider API documentation, April 2026.
Default pricing reflects Claude Sonnet ($3/M input · $15/M output).
AI-native orchestration cost models context accumulation as an
arithmetic series – the cost driver is quadratic with workflow length,
not linear. Pega cost reflects targeted LLM calls only at designated
agent steps, plus optional platform fee. All assumptions are
adjustable. Estimates are illustrative.
This ROI calculator
provides estimates only and is intended to help you explore potential
outcomes based on sample information. The calculations rely on
assumptions and averages that may differ significantly from your
actual experience. Results are not a substitute for professional
analysis, and Pega makes no representations or warranties, express or
implied, regarding the accuracy, completeness, or reliability of the
output. Past or estimated costs and performance are not reliable
indicators of future results.
What counts as a workflow?
One end-to-end
process your AI handles – a customer service case, a loan application,
an onboarding request, a claims review.
Quick estimate:
If your team handles ~200 cases a day, that's roughly
4,000/month.
Total steps end-to-end
Every action in the
workflow counts – data lookups, decisions, status updates, async
waits, and AI agent calls. In an AI-native system, the orchestrator
re-reads the full history at every one of these.
Typical range:
Simple process – 10–20 · Enterprise case – 30–60
Steps that need AI judgment
Of your total
workflow steps, how many actually require an LLM – classification,
document analysis, drafting, decision-making? The rest are handled
deterministically by Pega at near-zero token cost.
Quick guide:
Typically 20–40% of total steps · A 40-step workflow might have 10–15
genuine AI agent steps
What are AI agent steps? The number of steps inside
your workflow where an LLM is actually called – decisions,
classifications, drafting. Non-AI steps like database lookups or rule
checks don't count.
Tip: If 28 of your 40 workflow
steps involve AI reasoning, set this to 28.
Total steps vs. agent steps The full length of your
workflow end-to-end, including non-AI steps like data retrieval, rule
evaluation, and system calls.
Example: A claims
workflow might have 40 steps total, but only 28 of those call an
LLM.
How many cases per month? The total number of times
this workflow runs in a month across all users or customers.
Quick estimate:
200 cases/day × 22 working days ≈ 4,400/month.
Input tokens per Pega agent call The number of
tokens sent to the model for each targeted Pega agent step.
Because Pega locks in context before the run, this stays small and
fixed – typically just the task prompt and relevant data.
Typical range:
500–3,000 tokens per call.
Output tokens per Pega agent call The number of
tokens the model returns for each targeted agent step. Pega's
structured outputs keep this concise – usually a classification, a
short decision, or a structured JSON blob.
Typical range:
100–800 tokens per call.
Why does context grow? In LLM-orchestrated
workflows, the model's conversation history grows with every step –
each prior action, tool result, and response gets appended. This is
what causes costs to compound.
Example: If each
step adds ~2,000 tokens of history, by step 20 you're sending 40,000
tokens just in context.
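The arithmetic behind that example is simple accumulation; a quick check, assuming a flat ~2,000 tokens appended per step:

```python
# Each step appends ~2,000 tokens of history (assumed flat rate), so
# the context the orchestrator must re-send keeps accumulating.
PER_STEP = 2_000
history_after = [step * PER_STEP for step in range(1, 21)]
print(history_after[-1])  # 40000 tokens of accumulated history by step 20
```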
Output tokens per LLM step How many tokens the LLM
generates in response at each orchestration step. This stays
relatively constant per step, but you pay it every step – unlike Pega
where only agent steps incur output cost.
Typical range:
200–1,500 tokens/step.
What's an input token price? What you pay per
million tokens sent to the model (your prompts, context,
data). Input tokens are always cheaper than output tokens.
Reference prices (Apr 2026):
GPT-4o ~$2.50 · Claude Sonnet ~$3.00 · Claude Opus ~$15.00 · Haiku
~$0.25 – all per 1M tokens.
What's an output token price? What you pay per
million tokens the model generates in response. Output tokens cost
3–5× more than input tokens because generating text is computationally
heavier.
Reference prices (Apr 2026): GPT-4o ~$10
· Claude Sonnet ~$15 · Claude Opus ~$75 · Haiku ~$1.25 – all per 1M
tokens.
Pega platform cost per case The Pega platform fee
charged per workflow execution, on top of raw LLM token costs. This
covers orchestration, routing logic, audit trails, and compliance
tooling.
Set to $0 if you want to compare pure LLM
token costs only, without factoring in platform licensing.
Departmental operations
10,000 cases/mo · complex case management · 40 total steps · 28 agent
steps
Enterprise
100,000 cases/mo · multi-system orchestration · 50 total steps · 30
agent steps
Scaled volumes
300,000 cases/mo · large-scale automated processing · 60 total steps
· 35 agent steps
PARAMETER
YOUR VALUE
NOTES
WORKFLOW
TOKEN PRICING
PER AGENT CALL (PEGA TARGETED STEPS)
LLM ORCHESTRATION (AI-NATIVE, PER STEP)
AI agent steps
Total workflow steps
Monthly volume
Input token price
Output token price
Pega platform cost
Input tokens / call
Output tokens / call
Context growth / step
Output tokens / step
Steps where an LLM is actually invoked (Pega only)
All steps end-to-end. AI-native re-reasons at every one
Scales cost linearly for both architectures
Claude Sonnet ~$3 · GPT-4o ~$2.50 · Opus ~$15
Typically 3–5× input price
Metered licensing – deterministic orchestration + audit trail
System prompt + case data scoped to that step only
Structured result, classification, or short decision
Key lever – appended history grows quadratically. Conservative at 2K;
real frameworks often 4K–8K
Chain-of-thought + next-step decision per master agent call
METRIC
PEGA
(DETERMINISTIC + AGENTS)
AI-NATIVE
(RE-REASONS EVERY STEP)
SAVINGS W/ PEGA
Input tokens / run
Output tokens / run
Total tokens / run
Token cost / run
Platform cost / run (Pega)
All-in cost / run
Monthly cost
Annual cost
Cost multiplier
lower cost
Pega cost per run
= (AgentSteps × InputTokens / 1M × InputPrice)
+ (AgentSteps × OutputTokens / 1M × OutputPrice)
+ PlatformCostPerCase
Only designated agent steps invoke an LLM. All other steps run
deterministically at near-zero AI cost. Context is scoped to each
individual call.
AI-native orchestration input tokens per run
= Σ (step i = 0 to N−1) [ 2,000 + i × ContextGrowthPerStep ]
= N × 2,000 + ContextGrowth × N × (N−1) / 2
This is an arithmetic series. The N×(N−1)/2 term makes cost
grow quadratically – not linearly – with workflow length. A
workflow twice as long costs roughly four times as much to
orchestrate. At step 20, the master agent is re-reading everything
from steps 1–19 just to decide what step 20 should be.
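The closed form above can be checked against the brute-force sum, under the same 2,000-token base prompt assumption:

```python
# Verify: the sum of (base + i*growth) for i = 0..N-1 equals the closed
# form N*base + growth*N*(N-1)/2, and doubling N roughly quadruples it.
def brute_force(n, growth=2_000, base=2_000):
    return sum(base + i * growth for i in range(n))

def closed_form(n, growth=2_000, base=2_000):
    return n * base + growth * n * (n - 1) // 2

for n in (1, 5, 20, 40):
    assert brute_force(n) == closed_form(n)

print(closed_form(40) / closed_form(20))  # ≈ 3.9: twice the steps, ~4x the tokens
```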
AI-native total cost per run
= (OrchestratorInputTokens / 1M × InputPrice)
+ (TotalSteps × OutputTokensPerStep / 1M × OutputPrice)
+ same targeted-agent costs as Pega
The orchestration layer is pure overhead on top of the agent calls
both architectures share. The gap widens with every step added to the
workflow.
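Putting the pieces together: the total above is the orchestration overhead stacked on the agent calls both architectures share. A sketch under assumed per-call token counts (1,500/400 per agent call, 700 output tokens per orchestration step – illustrative, not measured):

```python
IN_PRICE, OUT_PRICE = 3 / 1_000_000, 15 / 1_000_000  # Claude Sonnet defaults

def agent_cost(agent_steps, in_tok=1_500, out_tok=400):
    # Targeted agent calls: identical under both architectures.
    return agent_steps * (in_tok * IN_PRICE + out_tok * OUT_PRICE)

def ai_native_total(total_steps, agent_steps, growth=2_000, base=2_000,
                    orch_out=700):
    # Orchestrator input follows the arithmetic-series closed form.
    orch_in = total_steps * base + growth * total_steps * (total_steps - 1) / 2
    orchestration = orch_in * IN_PRICE + total_steps * orch_out * OUT_PRICE
    return orchestration + agent_cost(agent_steps)

total = ai_native_total(total_steps=40, agent_steps=28)  # ≈ $5.63 per run
overhead = total - agent_cost(28)                        # ≈ $5.34 pure orchestration
```

Nearly all of the per-run cost here is orchestration overhead, which is the gap the document says widens with every added step.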