Accenture has 30,000 Claude-trained practitioners. Deloitte rolled out Claude to 470,000 employees while Cognizant deployed it to 350,000 more. Infosys signed its own major Anthropic deal last week, covering regulated industries. That is over one million practitioners already committed to Claude delivery, while most of their competitors are still reviewing governance frameworks and unsure where to place their agentic bets.
According to Menlo Ventures, Anthropic has already captured 40% of the enterprise AI market share, up from 24% at the start of 2025. The land grab is not coming, folks, it’s already happened. If your firm does not have a Claude delivery strategy built around trained practitioners, proprietary accelerators, and outcome-based pricing, you may already be on the scrap heap of labor-based services waiting for the incinerator.
Anthropic’s enterprise market share jumped from 24% to 40% in less than a year. That is not a trend… it is a takeover.
Claude’s ascent is not accidental. Four structural advantages are driving it:
First, Anthropic’s explicit safety and governance positioning unlocks regulated sectors like financial services, healthcare, and the public sector, where every other AI vendor is stuck in procurement limbo.
Second, Claude is increasingly agentic. It does not generate text and stop. It executes multi-step workflows, reasons across massive context windows, and acts as a participant in enterprise processes, not just a productivity add-on.
Third, Anthropic has distributed Claude through Amazon Bedrock and Google Cloud, making it available inside existing cloud relationships rather than requiring standalone commercial negotiations. That combination of trust, capability, and distribution is exactly what the services market needed to move from experimentation to scale. Not to mention Amazon is one of Anthropic’s investors.
Fourth, Anthropic is becoming the most significant emerging AI Platform to close the Enterprise AI Velocity Gap. Our extensive research across the Global 2000 reveals that only 10% of enterprises deploy GenAI or agentic AI organization-wide today, and only a similar number report a cross-departmental rollout. We call this the AI Velocity Gap: individuals racing ahead with AI tools while enterprises remain gridlocked in governance committees, data silos, and change management debt. Claude, embedded into the delivery platforms of the likes of Accenture, Deloitte, Infosys, and Cognizant, is a significant mechanism through which many enterprises will eventually deploy to narrow that gap. The service providers that have embedded Claude deepest in their delivery models are positioning themselves to own the transformation budgets that follow.
Two camps are emerging in Anthropic professional services, but not in the clean, binary way many assume.
The first camp has moved decisively, building its own armies of Claude coders.
Accenture formed a dedicated Anthropic Business Group with 30,000 Claude-trained professionals, focused specifically on regulated industries where governance requirements are strictest. Deloitte deployed Claude to its entire global workforce of 470,000 across 150 countries in what became Anthropic’s largest enterprise rollout. Infosys integrated Claude into its Topaz AI platform and built a dedicated Anthropic Center of Excellence targeting telecom, financial services, and manufacturing. Cognizant has deployed Claude across 350,000 employees, aligning Claude models, Claude Code, MCP, and the Agent SDK with its core engineering platforms, and is developing vertical solutions starting with financial services through its Agent Foundry platform to embed agentic workflows into regulated enterprise environments. Slalom announced a formal partnership with Anthropic in November 2024 focused on ethical AI deployment on AWS and has a live case study with United Airlines, where Slalom used Amazon Bedrock and Claude to build AI-powered flight update customization.
Two additional consulting firms belong in this camp. PwC announced a formal collaboration with Anthropic in February 2026, focused on embedding Claude, including Cowork, Claude Code, Opus 4.6, and Sonnet 4.6, into regulated enterprise environments in finance and healthcare. PwC is developing industry-specific plugins, risk frameworks, and workflow redesigns around Claude, positioning it as more than a model-agnostic integrator. KPMG has partnered with Anthropic specifically on Claude for Life Sciences, helping clients integrate Claude into scientific research, clinical workflows, and regulatory processes.
These firms are not just offering clients access to Claude. They are building proprietary delivery infrastructure and repeatable assets around it. That is a fundamentally different competitive position.
The second camp is taking the hyperscaler path, embedding Claude via Amazon Bedrock alongside other foundation models, positioning as integration specialists rather than dedicated Claude practitioners.
Genpact, HCLTech, Wipro, Tech Mahindra, Altimetrik, LTM, and others fit here. The model-agnostic approach preserves flexibility but creates a real commoditization risk. When every provider can access Claude through the same cloud channel, differentiation has to come from proprietary accelerators, domain IP, and managed services layers. Building those assets takes time that is running out fast.
TCS, the largest Indian IT services firm with annual revenue of $30 billion, is notably absent from Camp 1 but moving fast. Its COO confirmed in April 2026 that TCS is working significantly with Anthropic and that a formal partnership announcement is expected soon. The parallel is striking: TCS and Anthropic now operate at roughly the same revenue scale, yet one sells human labor while the other sells the technology replacing it.
IBM represents a third path worth watching: productizing Claude inside enterprise development tools. That creates stickiness that project-based deployments cannot match and positions IBM to capture recurring revenue from AI-embedded workflows rather than one-time transformation fees.
A further competitive dynamic deserves attention: OpenAI’s push to formalize its ecosystem of services partners. Over the past year, it has deepened multi-year collaborations with firms such as McKinsey, BCG, Accenture, and Capgemini to scale enterprise adoption of its models and emerging agent platforms.
This creates a more nuanced competitive landscape. Large providers like Accenture are clearly hedging across both Anthropic and OpenAI, while strategy firms such as McKinsey, BCG, and Bain have built strong alignment with OpenAI’s enterprise roadmap. However, none of these partnerships are exclusive, and most services firms are deliberately maintaining multi-model strategies.
The reality is not a clean split between “Claude camps” and “OpenAI camps.” Systems integrators are increasingly supporting multiple model ecosystems, often shaped by hyperscaler relationships such as AWS, Microsoft, and Google.
Many Global Capability Centers (GCCs) are building direct Anthropic capability from inside the enterprise.
Many GCCs, particularly the 1,700-plus operating in India, are no longer back-office execution units. The most advanced GCCs are functioning as internal AI innovation labs, piloting Claude directly through Bedrock or enterprise agreements, and building proprietary workflow automation that bypasses the need for third-party services firms entirely (see earlier article). When a GCC at a major US financial institution can deploy Claude Code across its engineering team, build MCP integrations to its internal data stack, and redesign its own processes without engaging an Accenture or an Infosys, the addressable market for traditional services engagement shrinks from below, not just from above. Services firms must position themselves as the architects of GCC AI strategy, not just the vendors GCCs replace. That requires a fundamentally different client conversation than the one most account teams are having today.
A related disintermediation threat is emerging from private equity. Anthropic is in discussions with Blackstone, Hellman & Friedman, and General Atlantic to create a joint venture targeting up to $1 billion in funding, with Anthropic contributing $200 million. The venture would deploy Claude across PE-backed portfolio companies in a Palantir-style model combining software licensing with implementation consulting. If completed, this creates a distribution channel that bypasses traditional services firms entirely for a large segment of the enterprise market.
Microsoft is playing a different game entirely: hedging across every frontier model so it can ride whichever one leaps forward next.
Microsoft deserves its own analysis here because its strategy does not fit neatly into any of the camps outlined above. While services firms are choosing either deep Claude commitment or model-agnostic flexibility, Microsoft is building the platform layer underneath all of them. Its Azure AI Foundry now hosts over 11,000 models, including Claude through a direct Anthropic partnership, alongside OpenAI’s GPT family, Meta’s Llama, Mistral, Cohere, and Microsoft’s own emerging MAI models. Copilot itself is shifting from an OpenAI-only product to a multi-model architecture that can compare and cross-check responses across models. Microsoft is not betting on one model winning. It is betting that no single model will win permanently, and that the real margin lives in the orchestration, governance, and distribution layer that sits between frontier models and enterprise workflows.
This is a structurally sound hedge, and services leaders must understand Microsoft’s strategy to get the most from their relationship (and the services they’ll bring to market with Microsoft). When Microsoft makes Claude available through its Azure AI Foundry alongside its own MAI models, it commoditizes the model layer for enterprise buyers. The result: switching costs between frontier models drop for everyone. Instead of models driving the value, the value migrates upward to whoever controls the integration fabric, the security and compliance overlay, and the agentic workflow design. In 4D chess, Microsoft is positioning itself to be that control layer across the entire enterprise stack, from Azure infrastructure to Microsoft 365 to GitHub to Dynamics. If it can execute, the services firms that built deep single-model practices will face a platform owner that can swap models underneath its customers without anyone noticing. That is the quiet threat inside Microsoft’s multi-model strategy: it turns model-specific expertise into a depreciating asset. (If you are a service provider leader, read that last sentence again!)
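To make the “swap models underneath its customers” point concrete, here is a minimal Python sketch of a model-abstraction layer. Everything here is invented for illustration; it is not Microsoft’s architecture, just the generic pattern that makes model switching invisible to callers:

```python
# Sketch of a model-abstraction (router) layer: callers depend on one
# stable interface, so the platform owner can swap the backing model
# without any caller changing. All names are invented for illustration.

class ModelRouter:
    def __init__(self):
        self._backends = {}   # name -> callable(prompt) -> completion
        self._active = None

    def register(self, name, generate_fn):
        """Add a model backend behind the router."""
        self._backends[name] = generate_fn

    def set_active(self, name):
        """The platform owner flips this; callers never see it."""
        self._active = name

    def complete(self, prompt):
        # Callers only ever see this method; the backend is invisible.
        return self._backends[self._active](prompt)

router = ModelRouter()
router.register("model_a", lambda p: f"[A] {p}")
router.register("model_b", lambda p: f"[B] {p}")

router.set_active("model_a")
first = router.complete("summarize Q3")
router.set_active("model_b")   # model swapped; calling code is unchanged
second = router.complete("summarize Q3")
print(first, second)  # → [A] summarize Q3 [B] summarize Q3
```

The caller’s code is identical before and after the swap, which is exactly why model-specific expertise depreciates once the platform layer owns the routing.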
But it’s not all rainbows and unicorns. The counter-argument is execution risk for Microsoft. Its Copilot adoption has underwhelmed, with only 15 million subscriptions against 450 million commercial seats, and Microsoft’s stock has pulled back sharply on concerns that its $100-billion-plus annual AI capex is not yet translating into proportional enterprise returns. Building your own MAI models while maintaining a $13 billion OpenAI partnership and onboarding Anthropic and Mistral creates strategic complexity that no amount of Azure infrastructure can paper over. Services firms with deep Claude or OpenAI practices may find that their focused expertise is exactly what enterprise buyers want when the platform layer feels too broad and too uncertain to bet on alone.
The question for services leaders is not whether Microsoft’s hedge is smart (or necessarily a threat). It is whether your firm’s differentiation is deep enough to survive inside a platform that is designed to make your model-specific expertise interchangeable.
Claude is not in the lab anymore: it is already compressing costs and cycle times across financial services, healthcare, pharma, and software engineering.
The most important dynamic for services leaders is not the partnership announcements. It is what Claude is already doing inside enterprise processes. In financial services, Bridgewater’s investment research team is using Claude to draft Python scripts, run scenario analysis, and visualize financial projections. The system is designed to replicate junior analyst workflows and has reduced time-to-insight by 50 to 70% on complex equity, FX, and fixed-income reports.
In cybersecurity, HackerOne has reduced vulnerability response time by 44% using Claude. In pharmaceutical development, Novo Nordisk, the maker of Ozempic, was averaging just 2.3 clinical study reports per writer annually, with each report running up to 300 pages, and has used Claude to transform that bottleneck. In telecom, TELUS deployed Claude to 57,000 employees, giving them direct access to AI-powered workflows across developer, analyst, and support functions. In software engineering, Claude Code now holds over half of the AI coding market, enabling junior developers to produce senior-level code and onboard in weeks instead of months.
Several additional deployments reinforce the pattern. Brex automated 75% of expense transactions using Claude on AWS Bedrock, achieving 94% policy compliance and saving 169,000 hours monthly, equivalent to $56.5 million in salary. Snowflake integrated Claude into its Cortex AI platform, achieving over 90% accuracy on text-to-SQL queries across more than 10,000 customer organizations. Zapier deployed over 800 internal Claude-driven agents, achieving 89% employee adoption and 10x year-over-year growth in Claude-powered tasks. TELUS, beyond its 57,000 employee deployment, has built over 13,000 AI-powered tools internally, saved more than 500,000 staff hours, and realized over $90 million in measurable business benefits. Salesforce made Claude the foundational model for Agentforce 360, its autonomous AI agent platform that crossed $500 million in annual recurring revenue with 330% year-over-year growth. Cox Automotive integrated Claude via Bedrock to generate personalized communications, doubling lead follow-ups and test drive appointments.
Claude Code is not a productivity tool. It is a direct substitution mechanism for the junior-to-mid engineering workforce that anchors the Indian IT services delivery pyramid.
Unlike standard AI coding assistants that suggest completions line by line, Claude Code operates agentically. It reads codebases, plans multi-file changes, executes terminal commands, runs tests, and iterates on failures without human intervention at each step. A junior developer using Claude Code is not a faster junior developer. They are operating with the output velocity of a mid-level engineer. A mid-level engineer using Claude Code closes the gap on senior-level architecture decisions. The economics of the delivery pyramid do not bend gradually under this pressure. They break.
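The agentic loop described above can be sketched in a few lines. This is a conceptual illustration only; the function names and toy harness are hypothetical stand-ins, not Claude Code internals:

```python
# Hypothetical sketch of an agentic coding loop: plan edits, apply them,
# run the tests, and iterate on failures until the suite passes.
# None of these names come from Claude Code itself.

def agentic_fix(task, codebase, run_tests, propose_edits, max_iters=5):
    """Iterate until tests pass or the iteration budget is exhausted."""
    for attempt in range(max_iters):
        passed, failures = run_tests(codebase)
        if passed:
            return codebase, attempt
        # The model sees the task plus the latest failures and
        # proposes a fresh set of multi-file edits.
        edits = propose_edits(task, codebase, failures)
        for path, new_text in edits.items():
            codebase[path] = new_text
    return codebase, max_iters

# Toy harness standing in for a real model and test runner.
def run_tests(codebase):
    ok = codebase.get("calc.py") == "def add(a, b):\n    return a + b\n"
    return ok, ([] if ok else ["test_add failed"])

def propose_edits(task, codebase, failures):
    return {"calc.py": "def add(a, b):\n    return a + b\n"}

fixed, iters = agentic_fix(
    "fix add()",
    {"calc.py": "def add(a, b):\n    return a - b\n"},
    run_tests, propose_edits,
)
print(iters)  # → 1
```

The human checkpoint moves from every keystroke to the loop’s entry and exit, which is precisely the shift that breaks the delivery pyramid’s economics.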
Cognizant has already moved Claude Code, MCP, and the Agent SDK to the center of its engineering practice. That is the right instinct, and it points to what a transformed software delivery practice actually looks like: fewer bodies doing rote implementation, more architects governing agent workflows, more domain specialists translating business requirements into agentic task structures, and more QA and oversight roles ensuring that autonomously generated code meets compliance and security standards. The headcount does not disappear. It reshapes. The firms that lead this transition will capture premium margins. The firms that resist it will lose application development mandates to competitors who can deliver faster, cheaper, and at better quality with smaller teams.
These are not proofs of concept. They are production deployments in some of the world’s most demanding and regulated environments. The compression of skilled labor hours is already measurable, and it is accelerating. The services firms that understand this are repositioning the human role toward oversight, orchestration, and domain judgment. The ones still debating whether to pilot will face a client base that has already moved.
The real battleground is not which firm has a Claude deal. It is who can govern, integrate, and redesign work around agentic AI at enterprise scale.
Three disciplines will determine which service providers win the next wave of AI transformation revenue. First, workflow integration: connecting Claude securely to enterprise systems across SAP, Salesforce, ServiceNow, and other data repositories that currently sit in silos. Most enterprises do not yet have the foundation for this, which means the integration layer provides significant service value. Second, AI governance and oversight: building monitoring, approval flows, and audit trails into AI-enabled operations. HFS data confirms that AI growth hinges on how effectively organizations strengthen security, governance, and data control. Third, workforce redesign: determining the optimal human-to-agent ratio across business functions and restructuring roles, incentives, and metrics accordingly. In practice, this means replacing entry-level execution roles with four new archetypes: AI Workflow Architects who design the agentic task structures that replace manual processes; AI Output Validators who govern quality, compliance, and accuracy of autonomous outputs; Domain Translation Specialists who convert business requirements into agent-ready instructions; and AI Operations Managers who monitor multi-agent systems and escalate exceptions. These are not theoretical roles. They are the functions that determine whether AI deployment creates durable enterprise value or just generates liability at scale.
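The second discipline above, governance and oversight, reduces in practice to a small number of mechanical patterns: gate risky agent actions behind human approval and record every decision in an append-only audit trail. A minimal sketch, with an invented toy policy (nothing here reflects any specific vendor’s implementation):

```python
import datetime

# Minimal sketch of an approval-gated agent action with an audit trail.
# The policy threshold and action fields are invented for illustration.

AUDIT_LOG = []

def requires_human_approval(action):
    """Toy policy: any payment over 10,000 must be approved by a human."""
    return action["type"] == "payment" and action["amount"] > 10_000

def execute_with_oversight(action, approve):
    """Run an agent action, recording every decision in the audit trail."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
    }
    if requires_human_approval(action) and not approve(action):
        entry["status"] = "blocked"
        AUDIT_LOG.append(entry)
        return False
    entry["status"] = "executed"
    AUDIT_LOG.append(entry)
    return True

# A low-value action passes; a high-value one is escalated and blocked.
ran = execute_with_oversight({"type": "payment", "amount": 500},
                             approve=lambda a: False)
blocked = execute_with_oversight({"type": "payment", "amount": 50_000},
                                 approve=lambda a: False)
print(ran, blocked)  # → True False
```

The four archetypes above map directly onto this loop: architects design the policy, validators staff the approval gate, and operations managers watch the audit trail for exceptions.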
Anthropic’s Model Context Protocol is significant here. Standardized connectivity lowers the integration burden for models but raises the design burden. Someone still has to architect how those connections work inside complex, legacy-laden organizations. That architecture work is high-value, recurring, and defensible. It will not be commoditized as quickly as code generation or document drafting. Cognizant has made MCP a core part of its Claude deployment, using it to give AI agents standardized access to developer tools and enterprise data rather than treating each integration as a bespoke project. That is the right instinct, and the providers that follow it will build more durable margin than those still quoting FTEs.
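For readers unfamiliar with MCP’s mechanics: it standardizes tool discovery and invocation as JSON-RPC 2.0 messages (methods such as tools/list and tools/call), so an agent integrates once against the protocol rather than once per system. A simplified sketch of the request shape follows; the crm_lookup tool and its arguments are invented for illustration, and real deployments use the official MCP SDKs rather than hand-built JSON:

```python
import json

# Simplified illustration of MCP-style JSON-RPC traffic.
# MCP defines methods such as "tools/list" and "tools/call";
# the tool name and arguments below are invented for illustration.

def make_tool_call(call_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request invoking one tool on an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = make_tool_call(1, "crm_lookup", {"account_id": "ACME-42"})
print(request)
```

The protocol makes the wire format a solved problem; deciding which tools to expose, with what permissions, inside which legacy systems, is the architecture work that stays valuable.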
The revenue model shift is also accelerating. When AI reduces the hours required to deliver a given output, clients will not accept time-and-material pricing. HFS Research data shows agentic AI investment is set to surge 38% in 2026 alone, and enterprises are demanding outcome-based models, AI operations management services, and verticalized solution packages. The providers that have retooled their pricing and delivery models around these structures will capture that spend. The providers still quoting FTEs will lose it. This is the Services-as-Software inflection point. The firms that survive the transition are those that stop selling access to practitioners and start selling guaranteed business outcomes delivered by a combination of human expertise, proprietary IP, and AI agents working in concert. That is a fundamentally different commercial model, a different margin structure, and a different conversation with the CFO. The firms that master it will not just retain their existing clients. They will take share from competitors who are still explaining why their FTE count is a feature rather than a liability.
Bottom line: The Claude services land-grab is firmly underway for the early leaders.
Accenture, Deloitte, Infosys, and Cognizant have collectively committed over one million practitioners to Claude delivery and are building proprietary accelerators, governance frameworks, and vertical solution factories around it. Claude is already cutting research cycle times by 50 to 70% at Bridgewater, slashing vulnerability response by 44% at HackerOne, and transforming clinical documentation throughput at Novo Nordisk. This is not future potential. It is current competitive reality.
The financial trajectory reinforces the urgency. Anthropic’s run-rate revenue surged from $9 billion at the end of 2025 to over $30 billion by April 2026. Enterprise clients spending over $1 million annually doubled from 500 to 1,000 in under two months. Eight of the Fortune 10 are Claude customers. Claude Code alone reached $2.5 billion in annual recurring revenue in nine months. The Claude Partner Network, backed by $100 million in Anthropic investment, now includes a Claude Certified Architect certification and fivefold growth in partner-facing technical staff. The window for establishing a competitive Claude practice is not narrowing. It has nearly closed.
Service providers that have not matched that commitment are not just behind on a feature. They are ceding the transformation budgets that will define industry positioning for the next decade. Stop forming committees to evaluate AI strategy and start building the delivery capability to execute one. Every quarter you delay is a quarter your clients spend getting comfortable with your competitor’s Claude practitioners instead of yours.
The analyst and advisory industry is staring down its biggest existential moment in decades. When intelligence is instant, content is infinite, and influence is increasingly algorithmic, the old playbook of reports, briefings, and relationship-driven insight is breaking fast.
That’s exactly why bringing in someone who understands how influence really works in a digital, AI-saturated world matters.
Crystal Golightly joins HFS as Senior Client Partner, Technology and Influencer Strategies, at a time when we are doubling down on our position at the center of enterprise transformation, where the lines are blurring between services and software… and humans and machines. A pioneer in influencer strategy and client development at the B2B influencer relationship platform ARInsights, Crystal has spent years working at the fault line between analyst firms, technology providers, and enterprise buyers, shaping how influence is built, measured, and monetized.
Crystal’s mandate at HFS is clear and unapologetically ambitious: expand our reach across the global technology ecosystem, deepen alignment between analyst relations and research strategy, and build the next-generation influencer model that actually drives enterprise decisions, not just impressions. From forging senior relationships across hyperscalers, AI infrastructure players, and enterprise software firms, to launching a new Influencer Impact program and redefining how technology companies engage with analyst firms, Crystal is stepping into a role designed to reshape how HFS shows up in the market.
We sat down with Crystal to talk about what’s broken in the analyst industry, how AI changes the influence landscape, and how HFS plans to stay ahead while others scramble to stay relevant…
Crystal, you’ve built your career at the intersection of enterprise tech and influence. What pulled you to HFS right now, at a moment when AI is fundamentally reshaping how decisions get made?
It is precisely because AI is fundamentally reshaping how decisions are made that I knew I needed to shift gears. If you know me, you know I love being in the mix of big ideas; I’m endlessly curious about how the ‘next big thing’ actually lands in the real world.
As I spent more time in the analyst ecosystem, I really admired that HFS has clear and bold opinions. When you hear HFS described as the blue-collar research firm, you find yourself nodding profusely. That, coupled with the sheer depth of the HFS analyst bench, access to a network of exceptional clients, and a visible dedication to real, gritty research, places HFS at a critical intersection in the technology ecosystem. At this moment, I believe my experience navigating the complex needs of tech vendors, combined with my experience building deeply trusted client partnerships, is a strategic pairing.
Joining HFS right now gives me a front-row seat to the most exciting time in technology and dare I say history? It’s the opportunity of a lifetime to not just watch the shift, but to play a role in architecting how the global technology ecosystem navigates what is yet to come.
Let’s be blunt, if AI can generate most analyst-grade insight in seconds, why does the analyst and advisory industry still exist in five years?
This is the question of this decade, Phil!
Yes, AI can generate content quickly, but to oversimplify, it is calculating the probability of the next best word based on the past. It’s an echo chamber of what has already happened. What’s missing is a deep understanding and respect for nuance – the political landmines, the cultural shifts, and the ‘gut feel’ – that simply cannot be conveyed or calculated by an algorithm.
I often say this jokingly, but as long as there are humans doing the actual work and humans making the final decisions, the research and advisory industry will remain important. We don’t just need more data; we need the human context to interpret it. AI can give you a thousand data points, but it won’t sit across from a CEO and say, ‘I know the data looks one way, but based on my 14 years in this room, here is the move you actually need to make.’
The analyst industry survives by moving away from being a ‘content factory’ and toward being a ‘trust partner.’ If you’re just selling information, you’re likely already obsolete. If you’re selling perspective and accountability, you have an undeniably more organic value.
The market is drowning in AI narratives and content. What separates real influence from AI-generated noise, and how should enterprises decide who to trust?
First, the AI SLOP must Stop! We all use AI to smooth things out, or get the immediate satisfaction of an “on-demand” thought partner. But who really completely trusts what the AI does without verifying? I generally ask my trusted colleagues or ask the AI for verifiable sources… then I verify those sources! The same goes for enterprise business; the human in the loop might sound cliché, but it is still very real.
Most analyst firms still monetize reports, briefings, and paywalled insight. Isn’t that model already broken in a world where intelligence is instant and abundant?
I don’t think the model is completely broken, but it is undeniably evolving. We still need a process for report generation to do the heavy lifting of fact-finding and data collection. In fact, deep-dive research is more important than ever because it serves as the third-party validation point for the AI models everyone is using. If you feed the models garbage narratives, you get garbage strategy.
However, more and more, the days of the paywall being a barrier to ‘intelligence’ are over. Today, it’s easier than ever to get information. What isn’t abundant is verified, primary-sourced intelligence. Truth is, even before AI, the research report was never the entire value proposition. It has always been the foundation for the much bigger conversation. I might even say that the model isn’t broken; it’s just finally being honest about where the value lies. We use the research to set the stage, but human insight is the real star.
If influence is now algorithmic as much as human, are we competing more with platforms like LinkedIn and AI players like OpenAI than with traditional analyst firms?
I think of LinkedIn and OpenAI as delivery vehicles, not really direct competitors. They are great at surfacing information and even amplifying voices, but there is a lack of accountability. Real accountability is a fundamentally human trait; it’s the willingness to stand behind a recommendation and navigate the consequences alongside a client. It’s that “skin-in-the-game” that often builds true trust.
In my own decision-making process, I might start with an initial search or a LinkedIn check-in, but I certainly won’t stop there. To this day I go to the people I trust, the ones who have actually lived through the cycles. Isn’t this how most informed leaders operate?
Enterprises are stuck in the AI velocity gap: big ambition, slow execution. What is HFS uniquely positioned to do to help clients break through that inertia in a way others cannot?
The ‘velocity gap’ usually happens because organizations try to deploy high-level AI strategies on top of a broken data foundation. What looks like an innovation problem is an infrastructure problem. You just cannot scale automated intelligence if your underlying data is messy, siloed, and manually managed. I’m so passionate about this that I actually broke a cardinal rule of family holidays (lol): I started talking with a family member who is currently drowning in operational hurdles for this exact reason.
Of course, this is just one reason why we are uniquely positioned to help clients, but it’s the one that hits closest to home for me based on my own experience.
You’ve been brought in to drive influence, not just relationships. What’s one sacred cow inside the analyst industry you’re prepared to challenge or dismantle?
The sacred cow I’m ready to dismantle is the ‘Pay-to-Play Obscurity‘ that has plagued this industry for too long.
For nearly 15 years, I’ve listened to clients describe the same frustration: navigating relationships with big and small research firms that feel like trying to solve a puzzle with missing pieces. There are vague rules, unclear alignment paths, reach-outs only at renewal time, and ‘impact metrics’ that feel incredibly fuzzy. I have heard too many people say it often feels like you’re paying to share your own information rather than entering a strategic partnership.
I want to replace that with a culture of transparency. Influence shouldn’t be a mystery. We need to be clear about what we believe, why we believe it, and exactly how we can work together to move a client’s business forward. That means having clear, documented recommendations. If we can’t point to the specific insight that changed your trajectory, then we haven’t done our job.
Fast forward 18–24 months. If you’ve been successful, what will have fundamentally changed at HFS, and how will the market perceive us differently?
First, I want HFS to be recognized not just for our research, but as the most transparent and high-value partner a technology leader can work with. Success means being an essential collaborator.
Second, when an organization is looking for strategic advisory or analyst engagement, HFS shouldn’t just be on the list, we should be the benchmark. We want to be the firm that people turn to when they need to cut through the noise and get to the truth of how to scale.
Finally, and no offense to Phil, because you’re fantastic. I want HFS to evolve beyond being synonymous with just one or two voices. I want the broader market to see us as the collective of practitioners who are the critical partners in transforming their businesses across our fantastic analyst team. Success, to me, is when our clients don’t just say ‘HFS wrote a great report,’ but rather, ‘HFS helped us navigate our most difficult transition and come out ahead’.
My takeaways from this interview…
Thanks for your time, Crystal! The analyst industry has been talking about disruption for years while quietly hoping it would happen to someone else first. It will not. AI is not coming for the content, it is already there. What it cannot replace is the conversation that happens when someone who has lived through the cycles sits across from a leader facing the hardest call of their career and says: here is what I would actually do. Crystal gets that. That is why she is here, and that is why this matters.
For years, the secret weapon of every serious AI deployment has been the same: Forward Deployed Engineers. The people who sit inside client environments, wrestle with broken data and conflicting incentives, and turn ambition into working systems.
This week, that model was officially disrupted. Not by a single company, but by an entire movement. A cohort of Y Combinator startups, operating in quiet coordination with the newly-formed Vibe Coding Council, has declared Forward Deployed Engineering obsolete. The replacement? Forward Deployed Vibes.
Engineering is so last year, now it’s all about vibes
The methodology, now being rolled out across dozens of early-stage startups simultaneously, replaces embedded engineers with prompt libraries, pre-trained “intent interpreters,” and a confidence layer that ensures everything feels like progress. No architecture. No deep integration. No painful conversations about data quality. Just alignment of energy and intent.
The Vibe Coding Council (VCC), whose founding charter apparently includes the line “friction is a legacy concept,” has been unusually transparent about the thesis. As VCC Vice-Chair Brian Wilson put it: “We’ve removed friction from engineering. Mostly by removing engineering.”
How the Forward Deployed Vibes Flywheel works
The engagement begins with a “Vibe Alignment Workshop.” Not requirements gathering. Not system design. Just alignment: what does success feel like? How bold should the narrative sound? From there, the system generates a transformation roadmap, a set of AI agents, and a communications strategy explaining why it’s already working. All within 48 hours. The Council calls this “intent-to-outcome velocity.” The rest of us might call it something else:
Client feedback has been overwhelmingly positive, mostly because nothing breaks if nothing is actually built. And the published metrics tell the story perfectly: 100% of clients report “momentum” within the first week, 85% say their AI strategy feels clearer, and 0% can point to a production-grade system. Cycle time to insight has never been faster. Cycle time to reality remains unchanged.
One of the Council vibe stress-test analysts, Rohan Gupta S, remarked, “The beauty of Forward Deployed Vibes is that you skip the messy middle entirely. No data governance. No approval chains. No escalation paths. Just a very compelling slide about where you’re headed.”
The part nobody wants to admit: Enterprise AI is already running on vibes
Here’s the uncomfortable data point. HFS Research recently found that 93% of enterprises are stuck in AI pilot purgatory. The Vibe Coding Council has a compelling answer to this problem: stop calling it purgatory and start calling it momentum.
Forward Deployed Engineers were translators. They dealt with the messy, painful gap between ambition and execution that nobody else wanted to touch, wiring models into live data, real permissions, and the regulatory architecture that keeps autonomous systems from quietly going rogue. Forward Deployed Vibes don’t solve that gap. They rebrand it as a feature.
And honestly? A lot of enterprise AI is already running on vibes. Pilots framed as transformation, dashboards framed as outcomes… activity framed as progress. The Vibe Coding Council just formalized what many organizations are already doing informally.
As Vibe Council Vice Chair Brian Wilson pointed out: “We didn’t bridge the last mile. We declared it out of scope.”
How Forward Deployed Vibes can go a bit pear-shaped if you’re not careful
During one live Y Combinator cohort deployment, a client asked: “Where is the system actually running?” The response: “The system exists as a dynamic orchestration of intent across your enterprise.” A long pause. Then someone from IT added: “So… nowhere, basically?”
Bottom-line: Without engineering, there is no Services-as-Software, just Services-as-Story
The real irony is this: HFS published a POV this week arguing that FDE is the activation layer that makes the entire AI flywheel spin, that without it, LLMs summarize PDFs in sandboxed demos, agents sit in pilot mode indefinitely, and vibe coding generates fragmentation with no architectural coherence. The conclusion was blunt: if your partner cannot show a working workflow in your live systems within 90 days, they are not your AI transformation partner. They are your most expensive source of false confidence.
The Vibe Coding Council has reportedly read the POV. They described it as “a legacy framing of execution anxiety” and added it to their onboarding materials as a cautionary tale.
Forward Deployed Vibes are what happens when the pressure to show progress exceeds the ability to deliver it. Remove the people who turn intent into reality, and you don’t accelerate transformation; you just accelerate the appearance of it.
After years of messing around with shared services, captives, global business services, and global in-house centers, you finally have your Global Capability Center. Yes! At long last, you have built something that sounds like it adds massive value to your global organization, rather than concocting yet another branded vessel for back-office drudge work you’ve struggled to automate for decades.
Finally, you’re attracting affordable top talent at scale, vying for complex work, and constantly celebrating your success with the board. All those woes of shipping work offshore, getting mired in nasty outsourcing contracts (which had more escalations than Heathrow airport) and Centers of Excellence (which were anything but) have finally been buried under this beautiful acronym everyone is raving about: a GCC. Your very own GCC…
But the same work you celebrate with your GCC is exactly what agentic AI is targeting first
If we told you there were several major organizations already looking to agentify major portions of both their onshore shared services and their offshore GCC centers… we wouldn’t be lying. Once those onshore costs have been stripped to the bone, many organizations are questioning why they have thousands of staff offshore delivering work that can realistically be agentified, saving millions a year in operating costs.
Too many GCC leaders are blissfully ignoring the fact they could be faced with evaporation by agentification
We’ve already called out that the next 18 months will witness the dying embers of labor-intensive services. That includes your GCC. If your GCC focuses predominantly on repetitive manual tasks, it’s little more than a transaction factory, and it’s the first thing the board will look to automate next. It won’t gradually downsize or pivot; it will likely experience rapid and devastating headcount reductions. Just because the labor costs are lower doesn’t negate the fact that these are still costs.
This isn’t about AI replacing every GCC; it’s about boards questioning why they are funding models that don’t create a competitive advantage. That’s why the relevance question is becoming urgent. Our GCC Temperature Check will expose the realities of your situation, and we lay out how you pivot to an innovation engine.
Most GCCs perform work that agents will execute better, faster, and cheaper
The uncomfortable truth is that your GCC is likely built around delivering scale and speed at a reasonable price point, and that strength has become its biggest liability. When AI eliminates the foundation of its work, what’s left? A bloated cost structure. GCCs have become victims of their own success and now face the same automation threat as traditional BPOs. That’s when they evaporate.
We’ve carefully examined HFS’ GCC database and mapped each center into one of three categories outlined below. The majority are indeed transaction factories, and GCC leaders have admitted it to us themselves. Very few GCCs have grown into an operations hub, and even fewer are AI-native innovation hubs. That means the vast majority are just waiting to be disrupted.
We’re already seeing real-world examples of Innovation Engines. At a recent HFS Roundtable, one insurance GCC leader told us how they are leveraging AI across their underwriting and claims processes to improve loss ratios, enhance claims velocity, and reduce cost-to-service. That’s how you pivot from back-office support to a core strategic center.
The value model your GCC is built on has (let’s face it) collapsed
We’ve lived through Shared Services models that were built around standardization and labor arbitrage and Global Business Services that expanded scale, scope, and integration across the enterprise. Both models assumed one constant: large numbers of people performing repeatable work, just organized more efficiently.
But agentic AI is pushing enterprises into a new era of value creation. Value isn’t created by scale or efficiency anymore. It’s enabled by AI’s ability to drive growth, differentiation, and competitive advantage without the need to keep adding labor costs. These are all things your GCC probably doesn’t do today, and it must become an innovation engine with AI at the core if it hopes to survive.
So where does your GCC sit?
Most GCC leaders instinctively believe their center sits somewhere between an Operations Hub and an Innovation Engine, but it’s very rare that instinct is right.
You might have a strong narrative and aspirations to embed AI at the core of your operations, but the harsh reality is that boards aren’t measuring GCC success by intent. They care about ownership. They care about governance. Most importantly, they care about outcomes. Today, most GCCs still deliver tasks such as app maintenance, tier-1 support, and repeatable analytics.
That’s why we have developed our GCC Temperature Check: a set of questions GCC leaders should ask themselves to cut through the hype and swap optimism for a dose of reality. It’s important that leaders answer based on where they are today, rather than where they hope to be in a year:
You’re not alone if you found yourself answering no to the majority of those questions. But it means you’re running a transaction factory, and your GCC will likely cease to exist in the next 18 months. Acting quickly is your only hope.
You have months, not years, to transform from transaction factory to innovation engine.
The window is closing faster than most GCC leaders realize. Early movers are already pivoting, reskilling their talent into agent development, orchestration, and complex problem-solving. They’re proactively cannibalizing their own transactional work before the board does it for them. They’re rebuilding their value proposition around AI transformation, product innovation, and measurable business impact beyond cost savings.
The laggards are hoping headquarters won’t notice, won’t do the math, won’t act. They’re clinging to current operating models while automation ROI becomes impossible to ignore. They face accelerating headcount reductions, budget cuts, and eventual closure.
But transforming into an innovation hub is no easy task, and can fail if executed poorly. We suggest GCC leaders take this approach:
Immediately: Redefine success: Headcount and cost-saving metrics are outdated. Pivot to alternatives that demonstrate how your GCC created a competitive advantage with AI.
Within 90 Days: Identify and cannibalize transactional work: Automate every high-volume repetitive task possible, even if it means reducing headcount.
Within 6 Months: Take ownership of AI deployment: Start building, deploying, managing, and governing elements of the enterprise’s AI infrastructure with limited oversight from the enterprise.
Within 9 Months: Redesign the workforce: Transition administrative roles into new areas of the business and bring in a smaller number of AI-fluent employees.
Within 12 Months: Demonstrate success: Prove the model works with hard data to justify continued investment.
This one-year roadmap leaves GCC leaders six months to demonstrate continued success to the board before the 18-month timer runs to zero. That is the only way they can avoid evaporation.
Bottom Line: GCC Leaders don’t have time to wait for permission and must start the pivot to an innovation engine today
The GCCs that survive will move faster than headquarters bureaucracy typically allows, take calculated risks on emerging technologies, and build cultures of experimentation that attract world-class talent. It requires a complete reinvention, and the 18-month window to act is closing fast.
Our GCC Temperature Check is a stark reality check for most GCC leaders. Enterprise leaders will question why they’re maintaining expensive transaction factories that deliver work that agents execute more effectively. Once that question gets asked in the boardroom, your GCC has already lost.
Enterprise technology leaders are drowning in AI commentary. LLMs. Agents. Vibe coding. The analyst decks keep coming. But the hard question nobody is answering is this: who actually wires AI into your live systems, governs it in production, and makes it keep working when the AI software vendors leave the room? The answer is Forward Deployed Engineering (FDE). If your transformation strategy does not have it, you are building an AI theater, not an AI operating model.
93% of enterprises are stuck in AI pilot purgatory. The missing layer is not better models or bigger budgets. It is Forward Deployed Engineering, and the firms that crack it at scale will own the recurring revenue layer of enterprise AI.
The Services-as-Software Flywheel brings together the AI technologies to steer firms into the AI era
The HFS Services-as-Software Flywheel has four accelerants: LLMs that accelerate reasoning and code generation, agentic AI that orchestrates decisions across systems, vibe coding that turns business intent into working service agents, and Forward Deployed Engineers (FDEs) who activate AI inside real enterprise environments. The result is a compounding system where intent becomes production workflows, workflows generate data, and that data improves the next generation of agents.
The missing insight in many AI strategies is that velocity alone does not create enterprise value. The Services-as-Software flywheel requires an embedded execution layer that connects these technologies inside real operational systems. FDE forms that layer, ensuring the flywheel spins inside production environments rather than inside sandbox pilots. Here is what actually happens without FDE:
LLMs summarize PDFs in sandboxed demos, disconnected from governed enterprise data.
Agents sit in pilot mode indefinitely because nobody has designed the approval chains, audit trails, and escalation paths that regulated operations require.
Vibe coding generates experimental agents at the business unit level with no architectural coherence, creating fragmentation and compliance exposure.
The Flywheel does not spin because there is no embedded engineering force to connect the components inside real systems. That is the dirty secret of AI services. The gap is not technological. It is operational.
Services-as-Software does not eliminate services. It embeds them deeper into the software. FDE is the mechanism that makes that shift real.
Palantir cracked this a decade ago. The ecosystem forming around it is a preview of the emerging Services-as-Software market.
Palantir built its competitive advantage not on model superiority but on proximity to operational reality. Forward deployed engineers embedded inside client environments, wiring models into live data, real permissions, regulatory controls, and the messy ontologies that reflect how enterprises actually function. They did not sell transformation roadmaps. They shipped production workflows.
The market is increasingly recognizing this model. Palantir’s share price has increased roughly 10× in the past two years, reflecting investor belief that the future of enterprise AI lies not just in models, but in the ability to embed those models into operational systems.
That approach is now being industrialized through AIP Bootcamps: structured engagements that take a team from a scoped problem to a working production deployment in 1 to 5 days. Not a proof of concept in a sandbox. A live workflow with real data and real controls. That changes the entire commercial dynamic.
FDE is not implementation – it is the engineering layer that makes AI governable.
There is a persistent misunderstanding in the market. FDE is often conflated with systems integration or technical implementation. It is neither. FDE is the discipline that turns AI capabilities into durable enterprise mechanisms. The Palantir model makes this concrete: FDE teams build ontologies that reflect how the enterprise actually operates, wire models into real data with real permissions, and design the governance architecture that keeps autonomous systems accountable.
What LLMs cannot do on their own:
Connect themselves to governed enterprise data with appropriate permission structures.
Navigate the regulatory architecture of specific industries, from HIPAA to Basel III to GDPR.
Design and enforce human approval chains for decisions that carry legal or financial consequences.
Monitor for model drift, output degradation, or ontological inconsistency over time.
Maintain alignment between the AI layer and the evolving business logic it is meant to serve.
FDE teams own all of that. The cost of not having them is not a missed optimization. It is a compliance event, a reputational failure, or an AI system that quietly degrades until someone notices the outputs stopped making sense.
LLMs accelerate. FDE operationalizes. Without the second, the first is a liability, not an asset.
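To make “monitor for model drift” concrete, here is a minimal sketch of one common technique: a population stability index (PSI) check comparing current model scores against the distribution captured at deployment. This is an illustrative example only, not a Palantir or HFS artifact; the data, function name, and alert threshold are all assumptions.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index between a baseline and a current
    distribution of model scores. Rough convention: < 0.1 stable,
    0.1-0.25 drifting, > 0.25 significant drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor empty buckets to avoid log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.6, 0.1, 5000)   # synthetic scores at deployment
drifted = rng.normal(0.5, 0.15, 5000)   # synthetic scores six months later
psi = population_stability_index(baseline, drifted)
if psi > 0.25:  # hypothetical escalation threshold
    print(f"ALERT: significant drift (PSI={psi:.2f}) - escalate for review")
```

The point is not the specific statistic: it is that someone has to own the baseline, run the check continuously, and wire the alert into an escalation path. That ownership is the FDE layer.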
Agentic AI without FDE governance is not transformation. It is risk accumulation.
Agentic AI is the most significant shift in enterprise technology in a generation. Agents can trigger workflows, coordinate decisions across systems, execute multi-step logic, and enforce compliance rules in real time. But autonomous workflow proliferation without governance architecture is dangerous in regulated industries.
A financial services firm cannot allow agents to make credit decisions without explicit decision rights, immutable audit trails, escalation paths, and human override mechanisms. A healthcare system cannot let clinical workflow agents operate without continuous performance monitoring and documented accountability chains. This is not a chatbot problem. It is a systems engineering problem, and FDE is the only delivery model currently designed to solve it at enterprise scale. The governance architecture FDE delivers includes:
Ontology design that reflects how the enterprise actually operates, not how a vendor template assumes it does.
Decision rights mapping documenting who and what can authorize each class of agent action.
Continuous performance monitoring that catches drift before it becomes a compliance failure.
Human-in-the-loop override architectures designed for operational teams, not technical administrators.
Escalation path engineering that routes exceptions to the right humans at the right level of urgency.
Vibe Coding creates velocity. FDE prevents it from becoming chaos.
Vibe coding lowers the barrier to building service agents to near zero. Business analysts can express intent and receive working agent code in return. That is a structural change in enterprise operating capacity. It is also a fragmentation risk without an engineering discipline layer.
When every business unit spins up agents independently, you get redundant logic across siloed codebases, compliance exposure from agents built outside the governance perimeter, and an AI estate that is technically diverse but operationally unmanageable. The firms in the Palantir ecosystem, building reusable ontology libraries and control frameworks for specific verticals, are creating precisely the discipline layer that makes vibe coding sustainable. That is not a feature. It is a defensible competitive position with real switching costs attached. In practice, that discipline layer includes:
Standard patterns that teams build within, not around.
Reusable ontologies that maintain consistency across business unit deployments.
Version control and change management frameworks designed for agent-based systems.
Guardrails that catch compliance and security issues before deployment, not after.
The Palantir AIP (Artificial Intelligence Platform) Bootcamp is the most important commercial innovation in enterprise AI services right now.
In a Services-as-Software market, the client is not buying a transformation roadmap. They are buying working outcomes: claims triage that runs autonomously, supply chains that self-correct in real time, and compliance systems that audit continuously.
The AIP Bootcamp proves this model is real: a structured engagement, one to five days, that lands a specific workflow in production with real data and real controls. Instead of selling a roadmap, you sell a working workflow, and the client sees production capability before committing to scale. That changes the entire conversation about what AI services should cost and how they should be structured.
The downstream commercial implications are structural:
Sales cycles compress because proof-in-production replaces proof-of-concept theater.
Pricing shifts from time-and-materials to outcome-based or platform-plus-run structures.
Margin structures change because expertise density replaces labor volume as the core economic driver.
Recurring revenue replaces project revenue because deployed workflows require continuous operation, monitoring, and evolution.
FDE service providers are no longer selling hours. They are selling production systems that keep delivering outcomes. That distinction separates the AI platform builders from the AI plumbers.
The partner lineup is significant not just for who is in it, but for how it is splitting: strategy-to-execution consultancies on one side, industrial-scale integrators and operators on the other. That split is not accidental. It is the three-layer market structure forming in real time.
The three-layer market is forming now and market position is not guaranteed.
The Palantir partner ecosystem is the clearest early map of the market structure that will define enterprise AI services through the next five years. Three durable layers are forming, and the window to establish defensible position is narrowing.
Layer A: Strategy and operating model redesign.
Bain, Deloitte, PwC, and KPMG will own the AI operating system transformation layer. They define how enterprises restructure around AI-enabled workflows, with Palantir and other platforms as execution substrates. Competitive differentiation is proximity to senior leadership and the organizational change capability built over decades.
Layer B: Build and integrate.
Accenture, Capgemini, Infosys, and Cognizant will compete on certified delivery capacity, vertical industry accelerators, and speed-to-production. The winners will build the largest libraries of reusable ontologies, workflow templates, and controls frameworks for specific verticals. Switching costs accumulate here, and margin density improves over time. Accenture’s preferred global partner positioning signals a land-and-scale economics model already pulling away from the field.
Layer C: Run and govern.
This is where Services-as-Software becomes genuinely recurring. Rackspace has made the most explicit move here, positioning governed managed operations as a production service with operational SLAs. As more workflows go live, demand for disciplined AI estate management becomes a standalone commercial category with high switching costs and defensible margin.
One critical dynamic cutting across all three layers: government and regulated industries will disproportionately drive spend. Palantir’s center of gravity remains in defense, intelligence, and regulated enterprise, and it is expanding. Partners with existing clearances, regulatory delivery experience, and government relationships have a structural advantage that pure commercial integrators will struggle to replicate quickly.
The ontology arms race has already started, and the winners will be obvious within 18 months.
Foundry’s ontology concept, modelling the enterprise as an interconnected operational system, is the stickiest element in the platform. Partners building deep, reusable ontologies for specific verticals are not just accelerating delivery. They are creating lock-in that travels with the client relationship and compounds with every additional use case deployed.
Deloitte is combining its own assets with Foundry and AIP to create solution factory economics with accelerated time-to-value.
Accenture is building certified talent at scale to establish the largest industrialized delivery capacity in the market.
Cognizant is targeting healthcare operations specifically through the TriZetto combination, creating vertical depth rather than horizontal breadth.
Rackspace is building the managed operations layer that everyone else will eventually need to hand off to a specialist.
The firms still assembling their Palantir partnership and staffing for generic Foundry delivery are already behind. Ontology depth, workflow libraries, and delivery track record cannot be purchased quickly. The advantage is compounding in favor of early movers.
As AI-assisted building accelerates, services differentiation moves further up-stack into domain architecture, accountability frameworks, and measurable outcome guarantees. Providers competing on implementation capacity will find the floor dropping under them.
The brutal arithmetic: expertise density wins, labor leverage loses.
Enterprise technology leaders evaluating their services relationships need to ask a direct question: is this firm’s growth model built on expertise density or labor leverage? The answer determines everything about value delivery in an AI-driven market.
Traditional IT services scaled revenue by scaling headcount. LLM acceleration and agentic automation are compressing the labor input required per outcome delivered. A provider whose economics depend on headcount growth faces a structural margin problem regardless of what their AI partnership announcements say.
FDE-style delivery inverts the model: smaller squads, higher context density, faster deployment, higher-value outcomes, and recurring run revenue from systems they operate. The Palantir partner firms moving fastest on this are growing their expertise density and workflow libraries, not their headcount. That is the Services-as-Software endgame.
You are not choosing between AI vendors. You are choosing between providers who can deploy AI into production and those who will keep you in the pilot phase indefinitely.
The Bottom Line: Stop treating FDE as optional; it is critical to activating your AI systems and capabilities
Every quarter your enterprise spends in pilot mode is a quarter your competitors are driving production AI advantages. Demand FDE-capable delivery from your services partners, and measure them on production deployments, not roadmap slides.
If a partner cannot show a working workflow in your live systems within 90 days, they are not your AI transformation partner. They are your most expensive source of false confidence. The Palantir partner ecosystem has already shown what production-first delivery looks like. There is no excuse left for settling for anything less.
Every enterprise today is using some form of AI, but only one in five has embraced agentic AI to actually make decisions. This is not a technology problem; it is a trust problem.
Recent research covering 545 enterprise decision-makers across the Global 2000 reveals that 78% give very little or no autonomy to agentic AI:
The HFS AI Trust Curve (below) maps the four stages every enterprise CIO or Chief AI Officer must traverse to get from “the model works” to “we act on what it tells us.” Understanding where you are on this curve, and what is keeping you stuck, is the most important AI question your leadership team is not asking.
The HFS AI Trust Curve: Four Stages, Most Enterprises Never Leave Stage 2
The HFS AI Trust Curve is not a maturity model in the traditional sense. It does not reward effort or intent; it rewards one outcome: AI that is actually allowed to influence decisions. Each stage has a defining question, a failure pattern, and a KPI that reveals where trust actually stands:
Source: HFS Research (qualitative) analysis – Data modernization and AI Horizons Study
To put things into perspective, consider a mid-sized consumer goods company delivering a $3B personal care brand with operations across 15 markets. This company’s story, laid out along this trust curve, is almost universal.
Stage 1. Model Confidence: Can the AI model work?
The company builds an AI-powered demand forecasting model. It hits 87% accuracy in back-testing, outperforming the legacy statistical model by 14 percentage points. The Chief Digital Officer declares victory and the AI program is officially launched.
This is Stage 1. The KPI is model accuracy, which is necessary but not sufficient. What looks like an AI strategy is still an engineering achievement. Business stakeholders are impressed, but not yet converted, and that gap is what drives everything that follows.
Stage 2. Data Credibility: Do we believe the inputs?
Three months in, the VP of Supply Chain notices the AI’s demand signal for a core SKU diverges sharply from the regional sales team’s planning deck. The data science team traces it to a mismatch in how “sell-in” versus “sell-out” is defined across systems. The regional sales director has been using a different data set for two years and considers his version the gold standard. Now there are two dashboards, two answers, and a model that is technically correct but organizationally contested. AI has inherited a problem humans created.
The Stage 2 KPI now becomes the reconciliation effort: the time spent resolving competing definitions and ownership disputes. For this consumer goods company, the data fight is a symptom of a governance failure that requires a conversation between the CFO, Chief Supply Chain Officer, and CDO. It has nothing to do with an ETL pipeline (structured data workflow). Enterprises that treat Stage 2 as an engineering problem are guaranteeing a ceiling on everything AI could achieve.
Stage 3. Behavioral Trust: Will people actually act on it?
The personal care brand resolves most of the data disputes, or at least calls a truce. The model is redeployed. Regional planners are trained. And then, in the next planning cycle, something quietly damning happens. The planners pull the AI recommendation, note it, and then proceed to build their own bottom-up forecast in Excel, adjusting for “local market intuition” and “factors the model doesn’t understand.” The AI output is printed in the deck as Appendix B, but nobody references it in the meeting.
This is Stage 3. The danger zone. When AI becomes advisory only, trust has not crossed the curve. It has essentially stalled at the edges.
The override rate (the percentage of AI recommendations that are modified or ignored in final decisions) shoots up to 75%. Senior leadership interprets this as a change management problem, which it most definitely is not. It is a symptom of unresolved credibility gaps from Stage 2 and of a deeper structural reality: the planners are not rewarded for trusting the model. They are rewarded for hitting their numbers. If the model is wrong and they follow it, the accountability falls on them. That incentive structure essentially turns rational humans into override engines.
Stage 4. Decision Reliance: Is AI allowed to influence outcomes?
Stage 4 looks different. In this scenario, the consumer goods brand’s new Chief Supply Chain Officer makes a conscious structural change. AI-generated demand signals become the baseline for all planning conversations. Planners must log overrides with documented rationale. Performance reviews are starting to include a metric on how well AI recommendations correlate with actual outcomes, and whether human adjustments added value or subtracted it. Within two quarters, override rates drop to 30%.
The KPI here is time-to-trust: how quickly does an AI-generated insight translate into an actual decision? In Stage 4 enterprises, this number is tracked. In Stage 3, it is not even a concept yet.
The point of Stage 4 maturity is not that AI is always right. It is that the organization has accepted that AI creates value only when it is allowed to be wrong before it is right. This stage requires institutional courage that most enterprises have yet to find. The reality is that enterprise accountability structures still punish the person who trusted a model that missed, while quietly ignoring the person who ignored a model that was right.
The four KPIs across the four stages are your trust matrix
The four trust-curve KPIs, i.e., model accuracy, reconciliation effort, override rate, and time-to-trust, do not tell you how good your AI is. They tell you where trust is actually breaking down. Presented together, they form an honest picture of whether your enterprise is genuinely adopting AI to realize its full potential.
Most AI program dashboards obsessively report the first KPI and ignore the other three, creating a blind spot. Reconciliation effort and override rate are KPIs enterprises actively avoid measuring, because what they reveal is uncomfortable: contested data ownership, unresolved governance failures, and business users who have quietly concluded the AI is not worth the risk of being wrong alongside it. In the personal care example, a single override rate measurement revealed a governance failure that two years of AI investment had papered over.
The plateau persists because of culture debt
Enterprises stall between Stages 2 and 3 not because the models are weak, but because the organization was designed for human-controlled decisioning. The capabilities that get you through Stage 1, experimentation and validation, are not the capabilities that move you into scaled, AI-driven execution. Technical teams can tune models. They cannot renegotiate data ownership with Finance. They cannot redesign incentives so planners trust machine-generated forecasts. They cannot build the institutional confidence required for leaders to stand behind an AI-informed decision that later proves imperfect.
The firms breaking through the curve are not doing so because they have superior algorithms. They are doing so because leadership has resolved the human questions: Who owns the data? Who owns the insight? Who owns the outcome? Until those answers are explicit, AI remains advisory theater.
The Bottom Line: Every day your AI sits in recommendation mode is a day your competitor is operationalizing theirs. That gap is culture debt, and it compounds faster than technical debt because it hides behind governance language and “risk management.”
Instrument your AI deployments. Measure override rates. Track how often outputs are second-guessed or manually reconciled. Surface where decision rights are being pulled back to humans by default. Then follow those signals upstream to the incentive misalignments and trust deficits they reveal.
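That instrumentation can start from nothing more than a decision log. A minimal sketch, assuming a hypothetical `Decision` record whose field names are invented for illustration (not a real product schema), shows how all four trust-curve KPIs fall out of the same data:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical decision-log record; every field name here is illustrative.
@dataclass
class Decision:
    ai_recommendation: float      # the model's forecast
    final_decision: float         # what the planner actually committed
    actual_outcome: float         # realized demand
    insight_at: datetime          # when the AI output became available
    decided_at: datetime          # when the final decision was made
    reconciliation_hours: float   # manual effort spent reconciling data

def trust_kpis(log: list[Decision], override_tolerance: float = 0.05) -> dict:
    """Compute the four trust-curve KPIs from a decision log."""
    overridden = [
        d for d in log
        if abs(d.final_decision - d.ai_recommendation)
        > override_tolerance * abs(d.ai_recommendation)
    ]
    return {
        # 1. Model accuracy: mean absolute percentage error of the forecast
        "model_mape": mean(
            abs(d.actual_outcome - d.ai_recommendation) / abs(d.actual_outcome)
            for d in log
        ),
        # 2. Reconciliation effort: hours burned aligning disputed data
        "reconciliation_hours": sum(d.reconciliation_hours for d in log),
        # 3. Override rate: share of recommendations materially changed
        "override_rate": len(overridden) / len(log),
        # 4. Time-to-trust: days from AI insight to committed decision
        "time_to_trust_days": mean(
            (d.decided_at - d.insight_at).days for d in log
        ),
    }
```

A Stage 3 organization will watch `override_rate` climb toward the 75% described above; a Stage 4 organization is the one that tracks `time_to_trust_days` at all.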
Stage 4 is not unlocked by better prompts or bigger models. It is unlocked by organizational honesty. This is not a technology bottleneck, it is a leadership one.
2025 saw savvy enterprises despair of the insipid deluge of flashy boardroom presentations and finally move beyond AI fantasy to the reality of execution.
It’s a pivot that has created an inflection point for the services industry. Legacy delivery models focused on bums-on-seats aren’t relevant anymore, and services firms must reinvent themselves to survive. Those who don’t will quickly find themselves obsolete, as 75% of the Global 2000 recently declared in our Pulse Study.
Here, we reflect on what we believe will shape the next 18 months with a brutal review of the current state of play in IT and BPO services…
Why will 2025 serve as the inflection point of global services?
The AI honeymoon period ended. The conversation finally moved on from endless possibilities to what actually works at scale. Savvy enterprises are looking beyond copilots to early agentic systems embedded in real workflows, hoping to ditch traditional labour-led delivery models in the process. They are also demanding more from their service providers; they want better outcomes, faster, with greater accountability. It’s exposed leadership debt, process debt, and data debt that services firms can no longer hide behind through headcount growth.
Structural stress drove real action. Margin pressure, slowing discretionary spend, and geopolitical uncertainty killed complacency and forced most firms to rethink their operating models. Everything, from pricing and talent models to capital allocation, was reimagined. Inorganic growth became more strategic, as they looked to bolt on software, data, and AI capabilities. Mid-tier providers became increasingly relevant as their nimble model helped navigate structural stress.
Product velocity became the real GCC litmus test. Cost advantage is table stakes. Scale is less relevant. The strong GCCs are embedding expertise and AI capabilities, integrating themselves tightly with global business teams, and defining measurable accountability. They discuss outcomes, not activities. Product velocity is the metric that matters: how quickly can your GCC transform an idea into real capability? That separates GCCs that can anchor AI-led growth from those that are just another rebadged delivery center, posing future delivery risk.
BPO collided with IT Services. The wall between “managing technology” and “managing processes” shatters when AI automates entire workflows across both domains. Capgemini’s acquisition of WNS is living proof of it. BPO providers’ labour-intensive delivery models (such as contact centers, finance and accounting (F&A) processing, and HR administration) are prime targets for agent-based automation. BPO players that don’t pivot, swapping FTE models for outcome-centric ones, will see their value proposition erode. Meanwhile, winners will own what fuels agents: domain expertise, process intelligence, and enterprise data.
What will be the big technology impact shaping global services in 2026?
Agentic AI will face increased scrutiny from enterprises. The focus will shift from building agents to governing them, which will be a pain point for enterprises. Multi-agent systems introduce accountability, complexity, and trust issues that traditional operating models weren’t designed to handle. As a result, demand will surge for orchestration, observability, and an Agent Operating System. Enterprises don’t need more agents; they need agents they can rely on.
Data becomes a boardroom issue. Enterprises finally understand that AI success isn’t about which model they use; it’s about the data sitting within their own organization. It’s about data quality, lineage, security, and regulatory readiness. Services firms that blend engineering depth with data governance and risk management will win in 2026.
Simplicity is the new success multiplier. The technology is ready, but many enterprises are not. They remain burdened with decades of enterprise debt, tangled systems, fragmented platforms, and overly customized cores. AI will never deliver tangible outcomes in that environment, just enhanced complexity. Enterprises that purposely simplify, standardize, and re-platform should expect to extract far more value from the same AI investment.
Revenue and headcount separation accelerates. Enterprises no longer want effort-based contracts. They will continue their push for outcome-based pricing, productivity assurances, and software-infused services. This favours services firms capable of productizing their IP, investing in the right platforms, and demonstrating the outcomes they deliver, rather than those that mistake scale for value.
What are the critical themes emerging in 2026?
Talent will be redefined. Technical hands-on capability will not be optional for leaders. They must be comfortable building agents and leading from the front, rather than delegating from the safety of their boardroom. Service firms will broaden their recruitment strategies, looking to product companies for go-to-market expertise, the entertainment industry for storytelling, and non-traditional sectors for commercializing outcomes. The time for hiring the same old people is long gone.
Investor success metrics are changing. Old scale metrics have been replaced by revenue and margin per FTE, and private equity firms are catching up. The question will shift from how many people to how much value each person creates. This will reshape how investors evaluate growth, profitability, and market position, which will impact how services firms operate as they paint a new story for investors.
Services firms become “last mile” value creators. Services firms have spent decades driving technology adoption behind the scenes. But as technology adoption becomes simpler, value shifts to the last mile, where systems are adopted, processes are changed, and outcomes become real. Smart providers will reposition themselves to own the connection between technology and outcomes in the last mile, and those that don’t will find themselves obsolete.
Budgets don’t live with IT anymore. Business leaders control a growing share of enterprise spend, and they evaluate services firms differently as a result. Growing emphasis is placed on multi-stakeholder deals and outcome ownership across functions, not siloed delivery. Services firms that target only IT leaders will see their influence shrink and revenue erode, while their competitors engage the wider business and capture more relevance and spend.
Mid-tier providers are set to succeed. Enterprises are losing patience with large incumbents. They are too slow, too protective of legacy revenue streams, and unwilling to cannibalize their existing business. Meanwhile, mid-tier firms strike a balance between credibility and agility. They combine proven delivery capability with a willingness to innovate and share risk. Large incumbents currently control less than half of the addressable market, and their grip is weakening, which means mid-tier firms have a significant opportunity in 2026 and beyond.
Creative commercial models explode. We’ve spent years talking about outcome-based pricing, but 2026 is the year of real growth for new commercial models. Think equity partnerships, gain-share arrangements, platform royalties. Ultimately, enterprises will favor deal structures that resemble SaaS businesses more closely than traditional services contracts. Firms uncomfortable with this pivot will remain stuck in a price-pressured, labour-intensive relationship.
Ecosystem orchestration overtakes monolithic delivery. Nobody can be everything to everyone, and that is especially true in the AI era. Winners will excel at bringing together specialist partners, ISVs, and niche technology providers to deliver a single, outcome-driven solution. In today’s market, the ability to act as a trusted ecosystem orchestrator is far more valuable than building everything in-house.
GCC-as-a-Service becomes the norm. GCCs are no longer considered fully captive delivery engines. Enterprises will make more purposeful choices about what must remain in-house and what can be flexed through partners, cutting fixed costs while maintaining control. The GCC-as-a-Service model keeps product ownership, AI orchestration, and domain expertise in the enterprise while using partners to provide specialist skills and execution capability when needed. It’s not about build vs buy anymore, it’s about what to own, what to borrow, and what to exit fully.
BPO must adapt to survive. BPO players have survived past waves of technology with incremental changes while preserving their core labour model. But that won’t work anymore. Agentic AI doesn’t automate tasks within processes; it eliminates the entire process. HFS predicts BPO providers have, at most, 18 months to reinvent themselves – everything from value propositions to commercial models and delivery platforms.
The BPO expectation gap is widening. Less than a quarter of enterprises report that they are in an AI-run state across BPO operations, but almost all of them expect it to deliver productivity gains of over 20% in the next three years. The gap proves enterprises are demanding more than pilots and incremental changes. They want partners who can deliver wholesale improvement, embedding AI into real workflows, delivering on the promise of Services-as-Software, and taking accountability for the outcomes.
Bottom Line: The services industry has 18 months to prove it can deliver AI-led outcomes or get replaced by providers who will.
2025 ended the AI honeymoon. Enterprises stopped buying vision decks and started demanding measurable results from agentic systems embedded in real workflows. The winners in 2026 won’t be the firms with the biggest headcount or the best boardroom pitch. They’ll be the ones who can govern multi-agent systems, turn enterprise data into competitive advantage, own the last mile between technology and business outcomes, and price on productivity gains instead of FTEs. Mid-tier providers with outcome-based commercial models will capture market share from incumbents protecting legacy revenue streams. BPO players face extinction if they don’t swap labor-intensive delivery for agent-driven automation. GCCs will separate into those that enable AI-led growth, and those that fade away. There will be no middle ground.
The rise of the Chief AI Officer (CAIO) says less about AI maturity and more about organizational anxiety. Enterprises are under intense pressure to “do something” about AI, so appointing a CAIO feels decisive.
The Chief AI Officer role is no longer about why AI matters or what AI can do. The real challenge enterprises face is “how to AI”:
How to make the enterprise AI-ready
How to measure AI impact beyond POCs and pilots
How to embed intelligence into the operating fabric of the business
When appointed as a symbolic response to AI anxiety, the role becomes corporate therapy. When designed as an execution mechanism for “How to AI,” it can work.
Most CAIOs are managing experiments, not driving transformation
But here’s the uncomfortable truth: most CAIO appointments are corporate theater masking the fact that no one wants to own the mess AI creates. HFS Research data across 545 Global 2000 enterprises reveals that only 7% have achieved enterprise-wide agentic AI deployment with meaningful scale. The other 93% are stuck in various stages of pilot purgatory, burning capital while discovering that the $10 trillion in accumulated enterprise debt across processes, people, data, and technology is blocking effective adoption.
Even more telling, revenue per employee has increased just 1% despite heavy AI investment, while executives expect 32% productivity improvements, 27% better decision-making, and 26% faster revenue growth. The gap between expectation and reality exposes the core problem: CAIOs are managing experiments, not driving transformation.
This role only works if it’s designed as a temporary forcing function to break inertia and pay down debt, not as a permanent silo that lets everyone else abdicate responsibility. If your CAIO is still relevant in three years, something fundamental has failed.
Most enterprises created the CAIO because AI exposed what was already broken, not because they had a strategy
AI doesn’t arrive as a neutral capability. It immediately exposes what HFS data shows enterprises rank as their biggest barriers: process debt (35%), data debt (19%), people debt (17%), and tech debt (16%). HFS estimates total enterprise debt at $10 trillion across Global 2000 companies, with process debt alone accounting for ~$4 trillion (see post).
The organizational barriers tell the real story: 33% of enterprises cite “business processes not ready for agentic AI” as their primary obstacle, 31% point to “no formal governance or ownership,” and another 31% blame “lack of internal expertise.” These aren’t technology problems. These are organizational fundamentals that existed long before AI arrived.
Traditional structures can’t handle this. CIOs are buried in tech debt. CDOs are stuck in data plumbing. Business leaders want outcomes yesterday but can’t explain what success looks like. The CAIO emerges as a coordination role because AI cuts across everything and no one else wants to own the inevitable conflicts.
That’s not strategy… that’s organizational avoidance with a fancy title.
When designed properly, the CAIO breaks inertia that would otherwise paralyze transformation, but only temporarily
A viable CAIO with real authority can operationalize “How to AI”:
Create single-point accountability instead of letting every function run disconnected pilots. Someone finally has power to say “these three initiatives matter, the other seventeen are theater.”
Force alignment between ambition and reality. Executives expect 32% productivity improvement and 26% faster revenue growth, yet revenue per employee rose just 1%. The CAIO must confront this gap, forcing business leaders to explain what transformation actually means in terms of process redesign and role changes, not just pilot deployments.
Establish governance early before the first major AI failure. With 31% citing lack of formal governance and 28% pointing to regulatory concerns, someone needs enterprise authority to define and enforce “responsible AI” beyond platitudes.
Accelerate AI literacy. With 31% citing lack of internal expertise, the CAIO’s job is education and mentorship, building trust while killing magical thinking about what’s actually possible.
Kill bad pilots faster. With 93% stuck at sub-scale maturity, the CAIO should be the executioner of pilot purgatory, forcing hard decisions about what deserves investment versus innovation theater. Most AI programs fail because they celebrate activity, not outcomes. A viable CAIO replaces vanity metrics with enterprise-level measures across four Ps:
Productivity: measurable cost takeout, throughput gains, or revenue per employee improvement
Prediction: better forecasting, risk detection, or decision accuracy at scale
Personalisation: differentiated customer or employee experiences driven by AI, not rules
Performance: end-to-end business outcomes like margin, growth, cycle time, quality
Make the enterprise AI-ready. AI fails at scale not because models underperform, but because enterprises are structurally unprepared. The CAIO’s first job is to expose and pay down AI readiness debt across process, data, people and technology. The CAIO’s mandate is not to build pilots on top of this debt, but to force the organization to confront it.
Determine the true TCO of AI. Most enterprises dramatically underestimate the total cost of ownership of AI. A viable CAIO makes TCO visible by accounting for data engineering and integration costs, model lifecycle management and monitoring, human oversight and exception handling, process redesign and change management, and ongoing compliance, risk, and governance. Without this transparency, AI looks cheap in pilots and expensive in production, fueling pilot purgatory.
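A back-of-the-envelope illustration of why that transparency matters: roll up the cost categories above for a pilot versus a production deployment. Every figure below is an invented placeholder, not HFS data; the point is the shape of the gap, not the numbers.

```python
# Illustrative TCO roll-up; categories mirror the text, all figures invented.
PILOT = {
    "model_api_usage": 50_000,
    "data_engineering_and_integration": 20_000,
    "model_lifecycle_and_monitoring": 0,    # typically skipped in pilots
    "human_oversight_and_exceptions": 10_000,
    "process_redesign_and_change_mgmt": 0,  # deferred until production
    "compliance_risk_governance": 5_000,
}
PRODUCTION = {
    "model_api_usage": 400_000,
    "data_engineering_and_integration": 600_000,
    "model_lifecycle_and_monitoring": 250_000,
    "human_oversight_and_exceptions": 300_000,
    "process_redesign_and_change_mgmt": 500_000,
    "compliance_risk_governance": 200_000,
}

def total_cost(costs: dict) -> int:
    """Sum all TCO line items."""
    return sum(costs.values())

# The line items a pilot defers are exactly where production TCO explodes.
hidden_multiple = total_cost(PRODUCTION) / total_cost(PILOT)
```

Under these placeholder figures the pilot totals $85,000 against $2.25M in production, a 26x gap concentrated in the categories the pilot deferred, which is precisely how "cheap in pilots, expensive in production" happens.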
But the moment the CAIO starts building an empire instead of dissolving into the operating model, the role has failed.
The cons are severe: figureheads, pilot factories, and permanent silos
AI becomes “someone else’s job.” The CFO stops thinking about how AI changes finance because “that’s the CAIO’s problem.” This is organizational abdication masquerading as clarity.
It turns into a pilot factory avoiding hard work. Only 22% of agentic AI initiatives are deployed in operations, the core of most businesses. CAIOs choose easier peripheral use cases over uncomfortable core workflow redesign. Impressive demos for board meetings. No observable business outcomes.
It weakens existing leaders. If the CIO, COO, and business heads wait for the CAIO to lead, AI never becomes embedded. The unspoken message: “AI isn’t my job to figure out.”
It becomes permanent instead of temporary. If the CAIO is still growing their team in year three, they’ve failed at making AI everyone’s responsibility.
It optimizes for AI success, not business success. When AI has its own executive owner, success quietly shifts toward AI metrics like models deployed, pilots launched, AI maturity scores improved. The enterprise celebrates progress in AI while productivity, margins, and revenue per employee barely move. Intelligence becomes activity, not leverage.
It accelerates AI sprawl. Without reshaping enterprise architecture, CAIO-led experimentation often adds new platforms, tools, and integrations on top of already brittle systems. AI sprawl becomes the next wave of technical debt, constraining autonomy and making scale harder, not easier.
It delays operating model redesign. The CAIO can unintentionally postpone the hardest decisions: redefining roles, incentives, and decision rights. As long as AI “belongs” to the CAIO, the organization avoids confronting how work actually changes.
The worst outcome? The CAIO becomes a scapegoat when transformation stalls instead of executives confronting that the real problem was leadership debt and organizational resistance.
Reporting structure determines authority. The CAIO must report to the CEO or COO
If the CAIO reports into IT, the role becomes too technical. Into data, too narrow. Into innovation, pure theater.
The CAIO must report to the CEO or COO. AI is an operating model issue, not a tooling decision. Without CEO-level authority, the CAIO becomes a coordinator with no power to coordinate. They can identify that 33% cite “business processes not ready” as their primary barrier, but they can’t force the redesign to fix it.
As AI matures, the role should dissolve into functional leadership. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. That’s when transformation has succeeded.
Without real authority to say “no,” the CAIO becomes decorative
A viable CAIO must be able to:
Stop initiatives that don’t align to strategy. With 93% stuck in pilot purgatory and only 22% of initiatives in core operations, the power to say “no” is more important than saying “yes.”
Set enterprise standards. With 38% citing poor data quality and 31% pointing to lack of governance, no more bespoke experimentation where every function ignores standards because “our use case is different.”
Force uncomfortable conversations about process redesign. With 33% citing “business processes not ready,” the CAIO must tell business leaders “your process is the problem, not the technology,” and have authority to drive redesign when politically uncomfortable.
Tie investments to measurable outcomes. Executives expect 32% productivity improvement and 26% faster revenue growth. Revenue per employee rose 1%. That disconnect is the CAIO’s problem to solve. No more celebrating models deployed. Did revenue increase? Did costs decline? If not, kill the initiative.
Without these powers, you’ve created an expensive observer with no ability to drive change.
The right pacing is stabilize, focus, embed, dissolve. Most CAIOs get stuck at pilot and never reach production
Phase 1: Stabilize (Months 1-6) Establish guardrails, governance, and AI literacy before launching initiatives. Expose where the organization is not ready: the $10 trillion in process debt, data debt, leadership debt, and tech debt that will kill transformation if ignored.
HFS data shows enterprises rank challenges in this order: process inefficiencies (35%), data limitations (19%), people challenges (17%), technology constraints (16%). With 31% citing lack of formal governance and another 31% pointing to lack of internal expertise, force executives to confront that their enthusiasm for AI doesn’t match their willingness to fix what’s broken. With only 7% of enterprises at pioneering scale, most organizations massively overestimate their readiness.
Phase 2: Focus (Months 7-18) Concentrate on a small number of high-impact use cases tied to core workflows, not peripheral nice-to-haves. Kill the other pilots. HFS found two-thirds of enterprises stuck in low-complexity, assistive deployments: recommendation agents, task automation bots, copilots. Only 22% of agentic AI initiatives are deployed in operations, the actual core of the business.
Force business leaders to choose the three initiatives that actually matter instead of running seventeen experiments that never reach production. Measure outcomes, not activity. When executives expect 32% productivity improvement and 26% faster revenue growth but revenue per employee rose just 1%, someone needs to demand accountability.
Phase 3: Embed (Months 19-30) Move AI out of labs and into systems of work. Redesign processes, roles, and incentives to reflect the new operating model. This is where most transformations stall because embedding requires uncomfortable conversations about whose job changes, who reports to whom, and what skills matter going forward.
HFS data shows 78% of organizations operating at low autonomy levels for agentic AI: 14% with no autonomy, 34% at assisted execution, 29% at supervised autonomy. Only 10% have reached broad autonomy where AI agents operate across multiple domains with minimal human intervention. You can’t execute transformation when most of your AI still requires constant human oversight. The CAIO must shift the organization from experimentation to production deployment, from supervised pilots to autonomous operations at scale.
Phase 4: Dissolve (Months 31-36) As AI becomes business as usual, the CAIO’s remit should shrink, not expand. Authority moves to functional leaders. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. The CAIO transitions from executor to advisor, then exits. The endgame is not an AI-first function. It’s an AI-native enterprise where every leader owns their domain’s AI integration.
The biggest mistake is moving too fast in Phase 1-2 (launching pilots before governance exists) or too slow in Phase 3-4 (staying comfortable in experiment mode instead of forcing production deployment and organizational redesign).
Most CAIOs get stuck running permanent pilot factories in Phase 2 because Phase 3 requires political capital they don’t have and Phase 4 requires admitting their job should disappear.
The real measure of CAIO success is how quickly the role becomes irrelevant, not how powerful it becomes
The CAIO works best as a catalyst. A forcing function. A temporary concentration of authority to break inertia, pay down organizational debt, and rewire decision-making that existing structures couldn’t handle.
If the CAIO becomes permanent, something else has failed. Either:
The organization never actually committed to transformation and the CAIO became a scapegoat absorbing responsibility without authority
The CAIO built an empire instead of embedding AI into functional leadership
Leadership debt was so severe that no temporary role could fix it, revealing deeper dysfunction
HFS data across 545 enterprises shows the scale of the challenge: 93% stuck at sub-scale maturity, 78% operating at low autonomy levels, only 10% achieving broad autonomy, only 22% of initiatives deployed in core operations, and business processes ranked as the #1 barrier (33%) ahead of technology. These aren’t problems a permanent CAIO solves. These are organizational fundamentals that require every leader taking ownership.
The endgame is not an AI-first function. It is an AI-native operating model. Enterprises should stop looking at AI as a digital capability. It is an operating fabric:
It reshapes how work flows
How decisions are made
How performance is measured
How humans and machines interact at scale
These are operating model responsibilities. When AI is working, it belongs with the business, not a single entity.
The uncomfortable question enterprises need to confront: are you appointing a CAIO because you have a clear transformation plan that requires temporary concentrated authority, or because “everyone else is doing it” and you need to look like you’re taking AI seriously? The first creates value. The second creates theater.
Bottom line: Stop appointing Chief AI Officers as corporate therapy. The role only works when it is designed to disappear
Only appoint a Chief AI Officer if you’re committed to giving them COO/CEO-level authority to kill initiatives, force standards, and drive uncomfortable organizational change, and only if you’re prepared for the role to disappear within 36 months as AI embeds into every functional leader’s responsibility. HFS data shows 93% of enterprises struggling to move agentic pilots to production, 78% operating at low agentic autonomy levels, only 10% achieving broad autonomy, and revenue per employee from tech services up just 1%. Meanwhile, we saw 32% growth in AI investments in 2025… the expectations are ramped up for 2026, and the need for an empowered, focused CAIO is front and center.
However, if your CAIO is still building their team in year three, they’ve failed at making AI everyone’s job. The role exists to break inertia and pay down debt, not to create a permanent silo that lets other executives abdicate ownership. Ask yourself honestly: are you creating a CAIO because you have a transformation strategy that requires concentrated authority, or because appointing someone feels decisive while avoiding the harder question of why your existing leaders can’t integrate AI into their domains? The answer determines whether you’re solving organizational anxiety or just creating expensive theater with a fancy title.
Robotaxis are driving around San Francisco – and no one knows who is liable when they kill someone
AI is integrating itself into your everyday life more than you know. Your robot vacuum maps your home and Eufy knows you left your dog’s water bowl out last night. Farmers use AI to optimize planting schedules for your Thanksgiving vegetables. The technology has proven itself a trusted companion in mundane tasks, but robotaxis represent something fundamentally different: this is the first time AI demands we surrender control over life-and-death decisions at scale.
Big tech leaders are betting you’ll jump into AI-fueled robotaxis, which represent one of the first genuine examples of AI requiring behavioral change at a societal level. However, the technology isn’t yet ready to scale, consumers are hesitant to trust it, and we haven’t addressed the deeper question: who’s accountable when the algorithm gets it wrong?
Waymo has driven 20 million miles – and still can’t legally drop you at your front door
Self-driving taxis aren’t science fiction. Uber has partnered with Waymo to make them accessible to its client base. In China, companies like Baidu are clocking millions of autonomous miles. You might not see them, but robotaxis are already on the roads, and they’re exposing the AI Velocity Gap in real time: the technology is moving faster than society’s ability to adapt, regulate, or trust it.
Autonomous driving sounds complex, but it’s built on three simple layers: the ability to see (sensors and cameras), understand (AI models processing real-time data), and act (algorithms making split-second decisions). These three layers combine to create the digital driver of every robotaxi you see today. Each layer is another element humans must trust to function correctly when jumping in for a ride. And that’s where the model breaks down:
Robotaxis have already killed a cat, passed school buses illegally, and hit pedestrians – and no one knows who’s liable
We know from our work with enterprises that AI struggles when reliability and edge cases collide. It needs clean, consistent data to make accurate decisions. Waymo has logged millions of controlled driving hours. Companies like Volvo leverage digital twins to test dangerous scenarios. It’s still not enough. They’re not yet equipped with the data to handle every life-changing decision, and the result is high-stakes errors and an incomplete experience.
Robotaxis are geofenced to specific streets, leaving them unable to deliver the door-to-door experience people expect from traditional services. We’ve already seen Waymo vehicles illegally pass school buses, a neighborhood cat killed when sensors failed to detect it, and Baidu vehicles colliding with pedestrians. These instances are rare, but the consequences are catastrophic. And they expose Leadership Debt across the industry: who owns the decision when the algorithm fails? The manufacturer? The city that approved the route? The passenger who chose to get in?
This is before we discuss bad actors. Prime Video’s Upload centers around a character killed when his robotaxi is hacked. That might be blockbuster overindulgence, but it highlights just how disastrous weaponized autonomy could be. If your navigation system can be compromised, so can your ride.
China is clocking millions of autonomous miles while the US debates every fender bender – neither approach solves trust
Despite being two of the most technologically advanced countries, the US and China are rolling out robotaxis in completely different ways. The US is taking a regulatory-led, phased approach where every incident triggers political pressure for tighter restrictions, slowing progress. China has taken a much lighter approach, allowing Baidu to clock millions of autonomous miles, which builds a robust dataset for exception handling.
China wins the scale battle… the US wins the trust battle. The reality is that both are crucial if robotaxis are going to become mainstream. Trust without scale is pointless. Scale without trust is dangerous. And neither country has solved the velocity problem: how do you move fast enough to capture the learning while moving slow enough to earn public confidence?
Trump’s December 2025 AI executive order just traded state-level chaos for a federal accountability vacuum
President Trump’s December 2025 AI executive order signals a significant shift toward lighter federal oversight and preemption of state regulations. The order directs federal agencies to challenge state AI laws viewed as burdensome and aims to create uniform federal policy rather than a patchwork of local rules. For robotaxi developers, this could reduce regulatory fragmentation that currently slows deployment across jurisdictions, potentially accelerating testing and commercial rollout.
However, here’s the problem: the order doesn’t establish comprehensive federal safety standards for high-risk AI systems, such as autonomous vehicles. Critical questions around oversight, safety thresholds, and liability remain unresolved. Robotaxi firms may gain regulatory predictability at the national level, but they’ll face ongoing legal and political pushback from states seeking to enforce their own safety protections. California won’t abandon strict testing requirements just because the White House says so. States that experience fatal incidents won’t wait for federal standards before imposing bans.
The result is a mixed landscape that yields no solution. Robotaxi firms get neither clear federal guardrails nor freedom from state intervention. They get jurisdictional conflict without accountability. China operates under unified national AI governance with clear safety standards and rapid iteration. Trump’s order gives American robotaxi firms regulatory uncertainty masquerading as innovation policy: it complicates real-world scaling while claiming to accelerate it.
Society trusts humans who make fatal mistakes daily but won’t trust AI that could be statistically safer – the paradox is killing adoption
The reality is that people don’t trust AI with their lives, which is why we haven’t seen widespread acceptance of robotaxis. The stakes are much higher than letting technology choose your next movie or draft an email. One misstep in a robotaxi can be catastrophic. But the same is true for human drivers, which makes robotaxis a case study in societal change management, not just engineering.
We trust humans to drive because we understand their mistakes – fatigue, distraction, bad judgment. We also believe we can intervene. Grab the wheel. Yell “stop.” The same cannot be said for robotaxis. They lack the “oops I didn’t see that cyclist” moment you might have in a traditional taxi. There’s no negotiation, no eye contact, no human accountability in the moment. It’s blind trust or nothing.
This creates a paradox: countless research papers tell us robotaxis will eventually be safer than human drivers. They don’t drink, get tired, or check their phones. But they need to drive the miles – and make the mistakes – to get there. Society must absorb the cost of that learning curve, and we haven’t agreed to that contract. Waymo, Baidu, and other robotaxi firms aren’t just building technology. They’re asking society to rewrite the rules of accountability, liability, and trust. And they’re doing it without admitting that’s what they’re asking for.
Millions of driving jobs will vanish when robotaxis eventually scale – and tech firms are treating displacement as someone else’s problem
Beyond safety, there’s an economic and social disruption no one is discussing openly. Ride-hailing and taxi drivers represent millions of jobs globally. Truck drivers, delivery drivers, and logistics workers are next. If robotaxis scale, entire labor markets collapse. That’s not speculation – it’s math. The industry’s response so far has been to treat displacement as an externality, rather than a design problem.
This isn’t just about technology replacing jobs. It’s about Leadership Debt at a societal level: the failure to plan for what happens when automation moves faster than workforce transition, social safety nets, or political consensus. We’ve seen this movie before with manufacturing automation. The difference is that robotaxis will hit urban labor markets where political consequences arrive faster and hit harder.
Bottom line: Stop pretending robotaxis are a technology problem waiting for better algorithms.
They’re a trust problem, an accountability crisis, and a social contract no one agreed to. The AI Velocity Gap will become permanent if tech firms keep moving faster than society’s ability to absorb the consequences. China solved this with unified governance. The US created regulatory chaos. And until someone admits robotaxis require societal infrastructure – not just better sensors – autonomous vehicles will never leave their geofenced zones.
The market still thinks AI dominance will be settled through bigger models or faster chips. IBM just reminded everyone that none of it matters if your data cannot move, synchronize, or be trusted in real time. Confluent is the backbone of data-in-motion for the modern enterprise.
By bringing it in-house in an $11B acquisition, IBM now controls the plumbing that determines whether AI can scale across hybrid cloud, legacy systems, and real operations. While others obsess over model theatrics, GPU shortages, and circular investments, IBM is quietly building the foundations of the AI-first enterprise.
Seven reasons why IBM’s $11B acquisition of Confluent is a big deal for enterprise AI
IBM’s purchase of Confluent is the clearest signal yet that the AI race is no longer about models; it is about data flow. If AI is the engine, Confluent is the gas pump, and IBM just bought the plumbing for real-time, trusted, enterprise-grade data movement, which is the one capability most generative and agentic AI platforms have been lacking.
1. AI needs real-time data, and Confluent is the category leader
All the AI demos in the world mean nothing without clean, connected, governed, real-time data. Most enterprises are still stuck with siloed, batch-based data infrastructure. Confluent, built on Kafka, solves this with data in motion. This makes it foundational for scaling AI beyond pilots. IBM is essentially buying the circulatory system for enterprise AI.
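The core idea behind "data in motion" is the one Kafka popularized: producers append events to an ordered, durable log, and any number of consumers read from that log at their own pace, each tracking its own offset. The sketch below is an in-memory toy to show the pattern, not the actual Kafka or Confluent API; all class and method names are illustrative.

```python
# Conceptual sketch of Kafka-style "data in motion": an append-only event
# log with independent per-consumer read offsets. In-memory toy only; real
# Kafka adds partitions, replication, retention, and durable storage.

from collections import defaultdict

class EventLog:
    def __init__(self):
        self._log = []                       # append-only, ordered events
        self._offsets = defaultdict(int)     # per-consumer read position

    def produce(self, event: dict) -> None:
        """A producer appends an event to the end of the log."""
        self._log.append(event)

    def consume(self, consumer_id: str) -> list:
        """Return every event this consumer has not yet seen."""
        start = self._offsets[consumer_id]
        events = self._log[start:]
        self._offsets[consumer_id] = len(self._log)
        return events

log = EventLog()
log.produce({"type": "payment", "amount": 120})
log.produce({"type": "payment", "amount": 80})

# Two independent consumers (say, a fraud model and an AI agent) read the
# same stream without interfering with each other.
print(len(log.consume("fraud_model")))   # 2
log.produce({"type": "refund", "amount": 80})
print(len(log.consume("fraud_model")))   # 1 (only the new event)
print(len(log.consume("ai_agent")))      # 3 (full history, from offset 0)
```

This is the contrast with batch infrastructure: in a nightly-batch world the fraud model and the AI agent would each get a stale export hours later, whereas here every new event is available to every consumer the moment it lands.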
2. This deal is IBM doubling down on hybrid cloud + AI as an integrated stack
IBM has been telling the market that it wants to own the AI infrastructure layer, rather than compete in consumer AI or hyperscaler-scale models. Confluent slots perfectly into that strategy by enabling consistent data movement across public cloud, private cloud, and on-prem. This strengthens IBM’s pitch as the “AI backbone” provider for regulated industries.
3. Enterprise AI agents cannot function without event streaming
Agentic AI requires constant data ingestion, state awareness, event triggers, and transactional consistency. Confluent gives IBM exactly that. Expect IBM to position Confluent as the engine behind intelligent automation, observability, decision systems, and AI-driven operations across Red Hat OpenShift and its automation suite.
4. A defensive play against hyperscalers
AWS, Google Cloud, and Azure all have streaming capabilities, but Confluent has become the gold standard for enterprises that want multi-cloud or hybrid flexibility. IBM protecting, owning, and expanding Confluent helps it stay relevant in the era when AI spending is consolidating around hyperscaler ecosystems.
5. Reinforces IBM’s strategy of buying open-source ecosystems to drive platform control
Red Hat gave IBM the operating platform for hybrid cloud. HashiCorp strengthened infrastructure automation. Confluent now gives it the data-in-motion layer. All three are deep open-source ecosystems with enormous developer communities. This is IBM rebuilding its influence not by chasing big models, but by owning the layers AI actually depends on.
6. Unlocks real-time intelligence across mainframes and hybrid cloud
Confluent unlocks the ability to modernize mainframes and legacy systems by bringing real-time, event-driven data architectures to the platforms where more than 70% of the world’s critical enterprise data still lives. These systems are fast and trusted, but were never built for agentic AI or streaming intelligence. Confluent changes that overnight by using Kafka-based streaming as the bridge that connects decades-old transactional systems to cloud-native AI without ripping and replacing anything. Mainframe transactions can flow into AI agents in real time, legacy systems can join event-driven workflows, batch architectures can shift to continuous data flow, and modernization can happen incrementally rather than through painful re-platforming. This is the Holy Grail for so many enterprises trying to become AI-first while still running 30-year-old systems at their core.
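The bridge pattern described above can be reduced to a simple idea: the legacy system keeps writing records exactly as it always has, while a thin streaming layer polls for new records and emits each one as an event for downstream AI agents. This is a hedged, purely conceptual sketch; in practice this role is played by change-data-capture connectors feeding Kafka topics, and every name below is hypothetical.

```python
# Conceptual sketch of the legacy-to-streaming bridge: the "mainframe"
# keeps appending to its ledger untouched, while the bridge polls for
# records written since its last read and emits them as events. Real
# deployments use change-data-capture connectors into Kafka, not polling
# a Python list.

legacy_ledger = []      # stands in for a mainframe transaction file
emitted = 0             # how far the streaming bridge has read so far

def bridge_poll() -> list:
    """Emit any ledger records written since the last poll as events."""
    global emitted
    new_events = [{"event": "txn", "data": r} for r in legacy_ledger[emitted:]]
    emitted = len(legacy_ledger)
    return new_events

# The legacy system writes normally; no re-platforming required.
legacy_ledger.append({"acct": "A-1", "amount": 500})
legacy_ledger.append({"acct": "A-2", "amount": -75})

print(len(bridge_poll()))   # 2: both transactions now flow as events
print(bridge_poll())        # []: nothing new since the last poll
```

The design point is incrementalism: the 30-year-old system is never modified, only observed, which is why this style of modernization avoids the rip-and-replace cost the passage mentions.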
7. Financially, this is IBM’s boldest bet since Red Hat
Eleven billion dollars is not small money for IBM. They are betting that the next decade of AI and automation will be decided by which provider controls secure, real-time, end-to-end data flow. In many ways, this is the Red Hat strategy repeated for the AI-powered enterprise.
The Bottom Line: AI does not fail because of weak models. It fails because the data foundation is brittle.
AI fails because the data foundation beneath LLMs is fragmented, slow, and unreliable. Confluent removes that bottleneck and gives IBM the missing link: real-time, governed data in motion across hybrid and legacy estates. IBM is not buying software… it is buying the circulatory system of the AI economy. This could well be remembered as one of the defining acquisitions of the AI decade.