No System Is 100% Right
No system is 100% right. Not the human process running today. Not the agent that replaces it tomorrow.
That is not a caveat. It is the starting condition for every economic decision about agentic AI. The cost of failure is real, it is variable, and it is asymmetric. A misrouted document costs pennies. A bad compliance decision costs millions. Any economic model that treats these as equivalent has already failed.
The question is not whether to adopt agentic AI. The question is whether the economics are understood well enough to make the decision structurally sound. For most organizations, they are not — because the baseline is wrong.
We recently published a prescriptive guidance document that lays out a practical framework for this economic analysis. What follows draws on that work.
The Baseline Problem
Every automation decision rests on a comparison: the cost of the current process versus the cost of the proposed replacement. If the baseline is wrong, the comparison is meaningless. Most baselines are wrong.
Organizations measure what is visible — headcount, salaries, software licenses — and call it the cost of a process. It is not. It is the cost of the inputs. The cost of the process includes five structural categories, most of which never appear on a budget spreadsheet.
Labor costs extend well beyond base salary. Fully loaded hourly rates — including benefits, workspace, equipment, management overhead, training, and development — are consistently higher than the number organizations use in their calculations. The gap is not marginal. It is structural.
Human performance and consistency costs are where the real distortion lives. Productivity fluctuations, absenteeism, fatigue cycles, procedure inconsistencies, quality control variations. These are not edge cases. They are the operating norm of every human-executed process. They are also invisible in standard cost accounting.
Technology and infrastructure costs — licenses, equipment, support overhead — are at least measurable, even if they are frequently underallocated to specific processes.
Lost business opportunity costs are the category most organizations ignore entirely. Slow response times, follow-up delays, operational bottlenecks that erode customer retention. These costs are real. They are large. They are almost never attributed to the process that generates them.
Risk and defect costs complete the picture. Error rates, rework expenses, insurance, compliance exposure. Rework alone typically costs a multiple of the original task's cost.
When all five categories are measured honestly, the true cost of a process is materially higher than what appears in any budget. That delta — between the measured baseline and the structural baseline — is where bad automation decisions originate. Not because the AI economics are wrong, but because the comparison economics were never right.
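To make that delta concrete, the five categories can be totaled for a hypothetical ten-person processing team. Every figure below is invented for illustration, including the 1.5x loaded-labor multiplier; none is a benchmark from the guidance document.

```python
# Illustrative only: all figures are hypothetical, chosen to show the
# structure of the comparison, not to represent real benchmarks.

base_salaries = 10 * 65_000                 # visible line item: 10 processors
measured_baseline = base_salaries + 40_000  # plus software licenses

# Structural baseline: the five categories.
labor = base_salaries * 1.5        # assumed fully loaded multiplier: benefits,
                                   # workspace, equipment, management, training
performance = labor * 0.10         # productivity swings, absenteeism,
                                   # inconsistency, quality-control variation
technology = 40_000 + 25_000       # licenses plus support overhead
lost_opportunity = 120_000         # churn from slow response and bottlenecks
risk_and_defects = 80_000          # error rework, insurance, compliance exposure

structural_baseline = (labor + performance + technology
                       + lost_opportunity + risk_and_defects)

delta = structural_baseline - measured_baseline
print(f"measured:   ${measured_baseline:,.0f}")
print(f"structural: ${structural_baseline:,.0f}")
print(f"delta:      ${delta:,.0f} ({delta / measured_baseline:.0%} hidden)")
```

The point of the sketch is not the specific numbers but the shape: only the first line of the structural baseline appears on the budget spreadsheet, so the comparison against any automation proposal starts from roughly half the real cost.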
The Asymmetry of Failure
Cost baselines are necessary but not sufficient. The second structural requirement is a failure model.
No system — human or automated — operates at 100% accuracy. The relevant question is not whether errors occur. It is what each error costs, and whether that cost is tolerable given the volume and velocity of the process.
A human processor handling insurance claims makes errors at a measurable rate. An agent handling the same claims makes errors at a different rate, with a different distribution, at a different cost per error. The economic comparison is not accuracy versus accuracy. It is total cost of error at scale — including detection, correction, downstream impact, and reputational exposure.
This asymmetry determines where agents are structurally appropriate and where they are not. High-volume, low-consequence tasks tolerate higher error rates because the cost per error is small and the volume savings are large. Low-volume, high-consequence tasks tolerate almost no error because a single failure can exceed the total savings from automation.
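The failure model's arithmetic can be sketched as expected error cost: volume times error rate times cost per error, where cost per error folds in detection, correction, and downstream impact. The rates and dollar figures below are hypothetical, chosen only to show why the same error-rate gap points in opposite directions at the two extremes.

```python
# Hypothetical figures throughout; none come from the guidance document.

def total_error_cost(volume, error_rate, cost_per_error):
    """Expected error cost at scale. cost_per_error should already include
    detection, correction, downstream impact, and reputational exposure."""
    return volume * error_rate * cost_per_error

# High-volume, low-consequence: document routing.
human_routing = total_error_cost(1_000_000, 0.02, 0.50)
agent_routing = total_error_cost(1_000_000, 0.05, 0.50)
# The agent errs more often, but the $15,000 gap is small next to the
# labor savings of automating a million routings.

# Low-volume, high-consequence: compliance decisions.
human_compliance = total_error_cost(200, 0.01, 2_000_000)
agent_compliance = total_error_cost(200, 0.02, 2_000_000)
# One extra error per hundred decisions adds $4,000,000, which can
# exceed the total savings from automating the process.
```

Note that the higher error rate belongs to the agent in both scenarios, yet the conclusion flips: the regime is set by cost per error and volume, not by the accuracy comparison alone.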
The failure model is not a risk assessment. It is an economic primitive. Without it, the decision to automate is a guess dressed as analysis.
From Cost Centers to Outcomes
When the baseline is honest and the failure model is explicit, the economic frame shifts.
Traditional departments operate as cost centers. Labor is a line item. When budgets contract, headcount drops. When headcount drops without process redesign, quality degrades. The cost center model optimizes for input reduction, not outcome improvement.
Outcome-based models invert this structure. Costs scale with business value generated. Operational expenses align with revenue. Capacity adjusts to demand. The unit of measurement shifts from hours worked to results delivered.
This is not an aspiration. It is a structural consequence of agents that can execute processes at variable scale with measurable outcomes. But it only works if the economics are grounded — if the baseline reflects the true cost of the current process, and if the failure model reflects the true cost of errors at the proposed scale.
The economic frame is the foundation. Every decision about agent autonomy, governance, and trust rests on it. If the baseline is incomplete, the autonomy model will be miscalibrated — too conservative where the economics justify delegation, too permissive where the failure costs demand constraint.
That calibration — how much autonomy, governed by what framework, trusted on what basis — is a structural problem, not an economic one. Economics tells you where agents are viable. It does not tell you how to make them trustworthy.
Source: Prescriptive Guidance - Economics of Agentic AI