The Four Forces Keeping Your Company Frozen on AI

2026-03-18 · 9 min read · ai · strategy · enterprise

94% of knowledge work is AI-feasible. Your company deploys on a third. The problem isn't the technology. It's four forces nobody's talking about.


Anthropic's 2025 Economic Index is the most rigorous study of AI's actual workplace impact ever published. Not predictions. Not surveys. Real usage data from millions of Claude interactions, matched against 800 US occupations.

The headline: 94% of knowledge work tasks are AI-feasible. The reality: most companies deploy AI on roughly a third. Some categories are worse. Legal sits at 10%. Healthcare practitioners at 5%. Education at 8%.

That's not a technology gap. If the models can handle 94%, the technology is already there. It's an organizational gap. And it stays open because of four forces that have nothing to do with how good the AI is.


Force 1: The design gap

Here's a stat from the Anthropic data that should concern everyone building AI products: 97% of what people use Claude for falls within a narrow band of tasks it already does well. People found a few things the chatbot is good at, and they keep asking for those things.

Give someone a chatbox that can theoretically do unlimited things and they'll use it for three things. Summarize this. Rewrite this. Answer this question. The vast majority of tasks AI could be doing aren't being done because nobody designed the interface to make them discoverable.

We're in the "tiny web page on mobile" phase of AI UX. Remember when mobile web was just desktop pages shrunk to fit a phone screen? That was 2008. It took until 2012-2013 for truly mobile-native paradigms to emerge. Swipe interfaces. Pull-to-refresh. Bottom navigation. Ideas that seem obvious now but didn't exist when the technology first arrived.

AI is stuck in the same phase. The paradigm is "chatbox." It works for text generation. It's terrible for workflow automation, process management, data analysis, and the hundreds of other tasks AI is theoretically capable of handling. The design gap isn't a UI problem. It's a paradigm problem.

Force 2: The trust gap

This one is subtle because it looks like a simple problem with a simple solution. "Users don't trust AI. Show them how it works. Problem solved."

Wrong. The trust problem runs in both directions.

Show too much transparency: "The AI agent searched 47 documents, compared 12 clauses, applied 3 compliance rules, and generated the summary using a chain-of-thought reasoning process involving..." Users tune out. They didn't ask for an audit trail. They asked for a summary. Excessive transparency creates friction and makes the system feel fragile. If you have to explain every step, something feels wrong.

Show too little transparency: "Here's your answer." Users don't trust it. Where did this come from? What data was used? Is this hallucinated? Without any visibility into the process, users treat AI output as unreliable and double-check everything manually. Which defeats the entire purpose.

Trust builds on a ladder. Interaction by interaction. Step one: the AI assists while a human decides. Step two: the AI decides while a human reviews. Step three: the AI decides autonomously on routine cases, flagging only exceptions. Step four: the AI manages other AI agents with human oversight at the system level.

Most organizations are stuck between step one and step two. Not because the AI isn't capable of step three, but because the organizational trust hasn't been built. There's no framework for progressive autonomy. No metrics for when to promote an AI system from "assistant" to "decision-maker." No protocol for handling the inevitable mistake at step three that makes everyone retreat to step one.
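The missing framework for progressive autonomy can be made concrete. Here is a minimal sketch of the four-step ladder as a promotion rule: an AI system only moves up a level after demonstrated reliability on real, human-reviewed cases. All names and thresholds (`min_cases`, the 98% accuracy bar) are illustrative assumptions, not anything from the Anthropic data.

```python
from enum import Enum

class AutonomyLevel(Enum):
    ASSIST = 1   # step 1: AI assists, human decides
    REVIEW = 2   # step 2: AI decides, human reviews
    EXCEPT = 3   # step 3: AI handles routine cases, flags exceptions
    MANAGE = 4   # step 4: AI manages other agents, human oversees the system

def next_level(level, accuracy, reviewed_cases, min_cases=500, threshold=0.98):
    """Promote one rung only after enough reviewed cases at high accuracy.

    Thresholds are assumed for illustration; in practice they would be
    negotiated per process and per risk tolerance.
    """
    if level is AutonomyLevel.MANAGE:
        return level  # top of the ladder
    if reviewed_cases >= min_cases and accuracy >= threshold:
        return AutonomyLevel(level.value + 1)
    return level  # not enough evidence yet: stay put
```

The point of writing it down, even this crudely, is that promotion becomes a measurable decision instead of a gut feeling, and a mistake at step three demotes one rung rather than triggering a retreat to step one.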

Force 3: The process complexity gap

This is the force that kills the most AI projects, and it's the least discussed.

Businesses aren't collections of isolated tasks. They're interconnected processes. A task that's 100% automatable in isolation might be 20% automatable when it sits inside a real business workflow.

Take invoice processing. The AI can read the invoice, extract the amounts, match it to a purchase order, and flag discrepancies. 100% automatable in a demo. In practice:

  • The CFO has an informal rule that invoices over $50K need a phone call to the vendor
  • Three suppliers send invoices in a legacy format that predates PDF
  • The AP team has a workaround for one client that involves splitting invoices across two cost centers
  • Certain line items trigger a compliance review that requires a human sign-off
  • The ERP system has a batch processing window that means anything submitted after 3 PM gets held until tomorrow

None of these exceptions are in the process documentation. They live in the heads of people who've been doing the job for fifteen years. When you automate the documented process, you automate the easy 60%. The remaining 40% (the exceptions, the workarounds, the informal rules) breaks immediately. And that 40% is the part that actually requires judgment.
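What makes these exceptions survivable is routing, not end-to-end automation: the AI handles the documented path and escalates everything else to a human. A rough sketch of that routing logic, with every rule and threshold invented to mirror the examples above:

```python
def route_invoice(invoice: dict) -> str:
    """Return 'auto' for straight-through processing, or a reason to escalate.

    All rules are hypothetical, mirroring the informal exceptions described
    in the text; a real deployment would discover these by shadowing the team.
    """
    if invoice["amount"] > 50_000:
        return "call vendor"           # CFO's informal over-$50K rule
    if invoice["format"] != "pdf":
        return "manual entry"          # legacy supplier formats
    if invoice.get("compliance_flag"):
        return "compliance review"     # human sign-off required
    if invoice["submitted_hour"] >= 15:
        return "held until tomorrow"   # ERP batch window after 3 PM
    return "auto"
```

Notice that the automatable path is one line; the other four lines encode institutional knowledge that no process document contained.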

Force 4: The pricing gap

Even when the technology works and the organization trusts it and the process is mapped, companies freeze at the pricing question. How do you pay for AI when every pricing model is broken?

Per-seat pricing was built for a world where humans use software. When AI agents do the work, you don't need 100 seats. You need 10. That's a 90% revenue hit for the vendor, which means either the vendor raises prices (killing the ROI) or the model collapses.
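The arithmetic behind that collapse is worth spelling out. Using assumed numbers (100 seats at a nominal per-seat price), the vendor's revenue falls 90%, and recovering it would require roughly a 10x price increase on the remaining seats:

```python
# Illustrative per-seat economics; all figures are assumptions, not article data.
seats_before, seats_after = 100, 10
price_per_seat = 50  # USD/month, assumed

revenue_before = seats_before * price_per_seat   # 5,000
revenue_after = seats_after * price_per_seat     # 500
revenue_drop = 1 - revenue_after / revenue_before  # the 90% hit

# To hold revenue flat, the vendor needs this multiplier on the remaining seats,
# which is exactly the price increase that erodes the customer's automation ROI.
required_multiplier = revenue_before / revenue_after
```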

Consumption-based pricing (pay per API call, per token, per minute) terrifies CFOs. "What's our AI bill going to be next month?" "Depends on how many documents we process." "That's not a budget. That's a guess." Predictability matters to finance teams. Variable costs feel like risk.

Outcome-based pricing (pay for results, not usage) sounds elegant until you realize it creates diminishing returns. The first month, the AI saves 40 hours. You pay for 40 hours of value. The second month, the AI saves 40 hours again, but now the team has already adjusted. The perceived value drops even though the actual value hasn't changed. The customer resents paying the same price for something that feels like table stakes.

| Pricing model | Problem | Who gets hurt |
|---|---|---|
| Per-seat | AI replaces seats, revenue collapses | Vendor |
| Consumption | Unpredictable costs, CFO can't budget | Customer |
| Outcome-based | Diminishing perceived value over time | Both |
| Hybrid | Complex, hard to explain, slow to close | Sales team |

The result: companies that could deploy AI today, with models that work, on processes they understand, don't deploy. Because they can't figure out how to pay for it. The pricing gap isn't a financial problem. It's a decision paralysis problem.

Why this means the window is wider than you think

The "AI moves fast" narrative suggests the opportunity window is 12-18 months. Models improve. Tools mature. Everyone catches up.

That narrative is about technology. The four forces aren't technology problems. They're organizational problems. Design paradigms change at the speed of UX innovation, not model releases. Trust builds at the speed of human experience, not GPU training runs. Process complexity gets mapped at the speed of institutional knowledge transfer. Pricing models evolve at the speed of sales cycles.

These timelines are measured in years, not quarters. The deployment gap that's wide today will still be wide in 2028. It might narrow. It won't close.

What to do about it

If you're an enterprise leader staring at the deployment gap, here's the honest sequence:

  1. Pick one process. Not the most impactful. The most documented. Start where you can actually see what's happening.
  2. Map the real process. Two weeks. Shadow the team. Document every exception. If this step feels unnecessary, you'll learn otherwise during deployment.
  3. Build the trust ladder. Human-in-the-loop first. Always. Autonomous later, earned through demonstrated reliability on real data.
  4. Solve pricing for phase 1 only. Fixed fee. Defined scope. Remove the cost uncertainty entirely for the first engagement.
  5. Measure what matters. Not "AI accuracy." Process efficiency. Time saved. Errors caught. The metrics that connect to business outcomes, not model benchmarks.

The four forces aren't walls. They're friction. Friction is reducible. But only if you know it's there, and most companies are still blaming the model for problems the model didn't cause.

94% feasible. 33% deployed. The gap isn't the technology. It's everything around it.