For years, the secret weapon of every serious AI deployment has been the same: Forward Deployed Engineers. The people who sit inside client environments, wrestle with broken data and conflicting incentives, and turn ambition into working systems.
This week, that model was officially disrupted. Not by a single company, but by an entire movement. A cohort of Y Combinator startups, operating in quiet coordination with the newly-formed Vibe Coding Council, has declared Forward Deployed Engineering obsolete. The replacement? Forward Deployed Vibes.
Engineering is so last year, now it’s all about vibes
The methodology, now being rolled out across dozens of early-stage startups simultaneously, replaces embedded engineers with prompt libraries, pre-trained “intent interpreters,” and a confidence layer that ensures everything feels like progress. No architecture. No deep integration. No painful conversations about data quality. Just alignment of energy and intent.
The Vibe Coding Council (VCC), whose founding charter apparently includes the line “friction is a legacy concept,” has been unusually transparent about the thesis. As VCC Vice-Chair Brian Wilson put it: “We’ve removed friction from engineering. Mostly by removing engineering.”
How the Forward Deployed Vibes Flywheel works
The engagement begins with a “Vibe Alignment Workshop.” Not requirements gathering. Not system design. Just alignment: what does success feel like? How bold should the narrative sound? From there, the system generates a transformation roadmap, a set of AI agents, and a communications strategy explaining why it’s already working. All within 48 hours. The Council calls this “intent-to-outcome velocity.” The rest of us might call it something else.
Client feedback has been overwhelmingly positive, mostly because nothing breaks if nothing is actually built. And the published metrics tell the story perfectly: 100% of clients report “momentum” within the first week, 85% say their AI strategy feels clearer, and 0% can point to a production-grade system. Cycle time to insight has never been faster. Cycle time to reality remains unchanged.
One of the Council’s vibe stress-test analysts, Rohan Gupta S, remarked, “The beauty of Forward Deployed Vibes is that you skip the messy middle entirely. No data governance. No approval chains. No escalation paths. Just a very compelling slide about where you’re headed.”
The part nobody wants to admit: Enterprise AI is already running on vibes
Here’s the uncomfortable data point. HFS Research recently found that 93% of enterprises are stuck in AI pilot purgatory. The Vibe Coding Council has a compelling answer to this problem: stop calling it purgatory and start calling it momentum.
Forward Deployed Engineers were translators. They dealt with the messy, painful gap between ambition and execution that nobody else wanted to touch, wiring models into live data, real permissions, and the regulatory architecture that keeps autonomous systems from quietly going rogue. Forward Deployed Vibes don’t solve that gap. They rebrand it as a feature.
And honestly? A lot of enterprise AI is already running on vibes. Pilots framed as transformation, dashboards framed as outcomes… activity framed as progress. The Vibe Coding Council just formalized what many organizations are already doing informally.
As Vibe Council Vice Chair Brian Wilson pointed out: “We didn’t bridge the last mile. We declared it out of scope.”
How Forward Deployed Vibing can go a bit pear-shaped if you’re not careful
During one live Y Combinator cohort deployment, a client asked: “Where is the system actually running?” The response: “The system exists as a dynamic orchestration of intent across your enterprise.” A long pause. Then someone from IT added: “So… nowhere, basically?”
Bottom-line: Without engineering, there is no Services-as-Software, just Services-as-Story
The real irony is this: HFS published a POV this week arguing that FDE is the activation layer that makes the entire AI flywheel spin, that without it, LLMs summarize PDFs in sandboxed demos, agents sit in pilot mode indefinitely, and vibe coding generates fragmentation with no architectural coherence. The conclusion was blunt: if your partner cannot show a working workflow in your live systems within 90 days, they are not your AI transformation partner. They are your most expensive source of false confidence.
The Vibe Coding Council has reportedly read the POV. They described it as “a legacy framing of execution anxiety” and added it to their onboarding materials as a cautionary tale.
Forward Deployed Vibes are what happens when the pressure to show progress exceeds the ability to deliver it. Remove the people who turn intent into reality, and you don’t accelerate transformation; you just accelerate the story about it.
After years of messing around with shared services, captives, global business services, and global in-house centers, you finally have your Global Capability Center. Yes! At long last, you have built something that sounds like it adds massive value to your global organization, rather than concocting yet another branded vessel for back-office drudge work you’ve struggled to automate for decades.
Finally, you’re attracting affordable top talent at scale, vying for complex work, and constantly celebrating your success with the board. All those woes of shipping work offshore, getting mired in nasty outsourcing contracts (which had more escalations than Heathrow airport) and Centers of Excellence (which were anything but) have finally been buried under this beautiful acronym everyone is raving about: a GCC. Your very own GCC…
But the same work you celebrate with your GCC is exactly what agentic AI is targeting first
If we told you there were several major organizations already looking to agentify major portions of both onshore shared services and offshore GCC centers… we wouldn’t be lying. Once those onshore costs have been stripped to the bone, many organizations are questioning why they have thousands of staff offshore delivering work that can realistically be agentified, saving millions a year in operating costs.
Too many GCC leaders are blissfully ignoring the fact they could be faced with evaporation by agentification
We’ve already called out that the next 18 months will witness the dying embers of labor-intensive services. That includes your GCC. If your GCC focuses predominantly on repetitive manual tasks, it’s little more than a transaction factory, and it’s the first thing the board will look to automate next. It won’t gradually downsize or pivot, but will likely experience rapid and devastating headcount reductions. Just because the labor costs are lower doesn’t negate the fact that these are still costs.
This isn’t about AI replacing every GCC, but about boards questioning why they are funding models that don’t create a competitive advantage. That’s why some GCCs are becoming increasingly irrelevant. Our GCC Temperature Check will expose the realities of your situation, and we lay out how to pivot to an innovation engine.
Most GCCs perform work that agents will execute better, faster, and cheaper
The uncomfortable truth is that your GCC is likely built around delivering scale and speed at a reasonable price point, and that strength has become its biggest liability. When AI eliminates the foundation of its work, what’s left? A bloated cost structure. GCCs have become victims of their own success and now face the same automation threat as traditional BPOs. That’s when they evaporate.
We’ve carefully examined HFS’ GCC database and mapped each center into one of three categories outlined below. The majority are indeed transaction factories, and GCC leaders have admitted it to us themselves. Very few GCCs have grown into an operations hub, and even fewer are AI-native innovation hubs. That means the vast majority are just waiting to be disrupted.
We’re already seeing real-world examples of Innovation Engines. At a recent HFS Roundtable, one insurance GCC leader told us how they are leveraging AI across their underwriting and claims processes to drive loss ratio improvements, enhance claims velocity, and reduce cost-to-serve. That’s how you pivot from back-office support to a core strategic center.
The value model your GCC is built on has (let’s face it) collapsed
We’ve lived through Shared Services models that were built around standardization and labor arbitrage and Global Business Services that expanded scale, scope, and integration across the enterprise. Both models assumed one constant: large numbers of people performing repeatable work, just organized more efficiently.
But agentic AI is pushing enterprises into a new era of value creation. Value isn’t created by scale or efficiency anymore. It’s enabled by AI’s ability to drive growth, differentiation, and competitive advantage without the need to keep adding labor costs. These are all things your GCC probably doesn’t do today, and it must become an innovation engine with AI at the core if it hopes to survive.
So where does your GCC sit?
Most GCC leaders instinctively believe their center sits somewhere between an Operations Hub and an Innovation Engine, but it’s very rare that instinct is right.
You might have a strong narrative and aspirations to embed AI at the core of your operations, but the harsh reality is that boards aren’t measuring GCC success by intent. They care about ownership. They care about governance. Most importantly, they care about outcomes. Today, most GCCs still deliver tasks such as app maintenance, tier-1 support, and repeatable analytics.
That’s why we have developed our GCC Temperature Check: a set of questions GCC leaders should ask themselves to cut through the hype and trade optimism for a dose of reality. Leaders should answer based on where they are today, rather than where they hope to be in a year:
You’re not alone if you found yourself answering no to the majority of those questions. But it means you’re running a transaction factory, and your GCC will likely cease to exist in the next 18 months. Acting quickly is your only hope.
You have months, not years, to transform from transaction factory to innovation engine.
The window is closing faster than most GCC leaders realize. Early movers are already pivoting, reskilling their talent into agent development, orchestration, and complex problem-solving. They’re proactively cannibalizing their own transactional work before the board does it for them. They’re rebuilding their value proposition around AI transformation, product innovation, and measurable business impact beyond cost savings.
The laggards are hoping headquarters won’t notice, won’t do the math, won’t act. They’re clinging to current operating models while automation ROI becomes impossible to ignore. They face accelerating headcount reductions, budget cuts, and eventual closure.
But transforming into an innovation hub is no easy task, and can fail if executed poorly. We suggest GCC leaders take this approach:
Immediately: Redefine success: Headcount and cost-saving metrics are outdated. Pivot to alternatives that demonstrate how your GCC created a competitive advantage with AI.
Within 90 Days: Identify and cannibalize transactional work: Automate every high-volume repetitive task possible, even if it means reducing headcount.
Within 6 Months: Take ownership of AI deployment: Start building, deploying, managing, and governing elements of the enterprise’s AI infrastructure with limited central oversight.
Within 9 Months: Redesign the workforce: Transition administrative roles into new areas of the business and bring in a smaller number of AI-fluent employees.
Within 12 Months: Demonstrate success: Prove the model works with hard data to justify continued investment.
This one-year roadmap leaves GCC leaders six months to demonstrate continued success to the board before the 18-month timer runs to zero. That is the only way they can avoid evaporation.
Bottom Line: GCC Leaders don’t have time to wait for permission and must start the pivot to an innovation engine today
The GCCs that survive will move faster than headquarters bureaucracy typically allows, take calculated risks on emerging technologies, and build cultures of experimentation that attract world-class talent. It requires a complete reinvention, and the 18-month window to act is closing fast.
Our GCC Temperature Check is a stark reality check for most GCC leaders. Enterprise leaders will question why they’re maintaining expensive transaction factories that deliver work that agents execute more effectively. Once that question gets asked in the boardroom, your GCC has already lost.
Enterprise technology leaders are drowning in AI commentary. LLMs. Agents. Vibe coding. The analyst decks keep coming. But the hard question nobody is answering is this: who actually wires AI into your live systems, governs it in production, and makes it keep working when the AI software vendors leave the room? The answer is Forward Deployed Engineering (FDE). If your transformation strategy does not have it, you are building AI theater, not an AI operating model.
93% of enterprises are stuck in AI pilot purgatory. The missing layer is not better models or bigger budgets. It is Forward Deployed Engineering, and the firms that crack it at scale will own the recurring revenue layer of enterprise AI.
The Services-as-Software Flywheel brings together the AI technologies to steer firms into the AI era
The HFS Services-as-Software Flywheel has four accelerants: LLMs that accelerate reasoning and code generation, agentic AI that orchestrates decisions across systems, vibe coding that turns business intent into working service agents, and Forward Deployed Engineers (FDEs) who activate AI in real enterprise environments. The result is a compounding system where intent becomes production workflows, workflows generate data, and that data improves the next generation of agents.
The missing insight in many AI strategies is that velocity alone does not create enterprise value. The Services-as-Software flywheel requires an embedded execution layer that connects these technologies inside real operational systems. FDE forms that layer, ensuring the flywheel spins inside production environments rather than inside sandbox pilots. Here is what actually happens without FDE:
LLMs summarize PDFs in sandboxed demos, disconnected from governed enterprise data.
Agents sit in pilot mode indefinitely because nobody has designed the approval chains, audit trails, and escalation paths that regulated operations require.
Vibe coding generates experimental agents at the business unit level with no architectural coherence, creating fragmentation and compliance exposure.
The Flywheel does not spin because there is no embedded engineering force to connect the components inside real systems. That is the dirty secret of AI services. The gap is not technological. It is operational.
Services-as-Software does not eliminate services. It embeds them deeper into the software. FDE is the mechanism that makes that shift real.
Palantir cracked this a decade ago. The ecosystem forming around it is a preview of the emerging Services-as-Software market.
Palantir built its competitive advantage not on model superiority but on proximity to operational reality. Forward deployed engineers embedded inside client environments, wiring models into live data, real permissions, regulatory controls, and the messy ontologies that reflect how enterprises actually function. They did not sell transformation roadmaps. They shipped production workflows.
The market is increasingly recognizing this model. Palantir’s share price has increased roughly 10× in the past two years, reflecting investor belief that the future of enterprise AI lies not just in models, but in the ability to embed those models into operational systems.
That approach is now being industrialized through AIP Bootcamps: structured engagements that take a team from a scoped problem to a working production deployment in 1 to 5 days. Not a proof of concept in a sandbox. A live workflow with real data and real controls. That changes the entire commercial dynamic.
FDE is not implementation – it is the engineering layer that makes AI governable.
There is a persistent misunderstanding in the market. FDE is often conflated with systems integration or technical implementation. It is neither. FDE is the discipline that turns AI capabilities into durable enterprise mechanisms. The Palantir model makes this concrete: FDE teams build ontologies that reflect how the enterprise actually operates, wire models into real data with real permissions, and design the governance architecture that keeps autonomous systems accountable.
What LLMs cannot do on their own:
Connect themselves to governed enterprise data with appropriate permission structures.
Navigate the regulatory architecture of specific industries, from HIPAA to Basel III to GDPR.
Design and enforce human approval chains for decisions that carry legal or financial consequences.
Monitor for model drift, output degradation, or ontological inconsistency over time.
Maintain alignment between the AI layer and the evolving business logic it is meant to serve.
FDE teams own all of that. The cost of not having them is not a missed optimization. It is a compliance event, a reputational failure, or an AI system that quietly degrades until someone notices the outputs stopped making sense.
LLMs accelerate. FDE operationalizes. Without the second, the first is a liability, not an asset.
Agentic AI without FDE governance is not transformation. It is risk accumulation.
Agentic AI is the most significant shift in enterprise technology in a generation. Agents can trigger workflows, coordinate decisions across systems, execute multi-step logic, and enforce compliance rules in real time. But autonomous workflow proliferation without governance architecture is dangerous in regulated industries.
A financial services firm cannot allow agents to make credit decisions without explicit decision rights, immutable audit trails, escalation paths, and human override mechanisms. A healthcare system cannot let clinical workflow agents operate without continuous performance monitoring and documented accountability chains. This is not a chatbot problem. It is a systems engineering problem, and FDE is the only delivery model currently designed to solve it at enterprise scale. That engineering discipline includes:
Ontology design that reflects how the enterprise actually operates, not how a vendor template assumes it does.
Decision rights mapping documenting who and what can authorize each class of agent action.
Continuous performance monitoring that catches drift before it becomes a compliance failure.
Human-in-the-loop override architectures designed for operational teams, not technical administrators.
Escalation path engineering that routes exceptions to the right humans at the right level of urgency.
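Decision rights mapping and escalation path engineering can be made concrete with a minimal sketch. Everything here is hypothetical for illustration: the action classes, autonomy limits, and approver roles are assumptions, not any real firm's policy or any vendor's framework.

```python
from dataclasses import dataclass

# Hypothetical decision-rights table: each class of agent action maps to a
# maximum autonomous monetary impact and the human role that must approve
# anything beyond it. All names and thresholds are illustrative.
DECISION_RIGHTS = {
    "credit_limit_increase": {"auto_limit": 500, "approver": "credit_officer"},
    "claims_payout":         {"auto_limit": 1_000, "approver": "claims_supervisor"},
    "account_closure":       {"auto_limit": 0, "approver": "compliance_lead"},
}

@dataclass
class AgentAction:
    kind: str       # class of action, keyed into DECISION_RIGHTS
    amount: float   # monetary impact of the proposed action
    agent_id: str

def route(action: AgentAction) -> dict:
    """Return an auditable routing decision for one proposed agent action."""
    rights = DECISION_RIGHTS.get(action.kind)
    if rights is None:
        # Unmapped action classes always escalate: no decision right, no autonomy.
        return {"decision": "escalate", "to": "risk_committee",
                "reason": "unmapped action class"}
    if action.amount <= rights["auto_limit"]:
        return {"decision": "auto_approve",
                "audit": f"{action.agent_id}:{action.kind}:{action.amount}"}
    return {"decision": "escalate", "to": rights["approver"],
            "reason": "exceeds autonomy limit"}
```

The design point is that autonomy is granted per action class, not per agent: a $250 claims payout auto-approves with an audit entry, a $900 credit increase routes to a named human, and anything the table doesn't recognize escalates by default.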
Vibe Coding creates velocity. FDE prevents it from becoming chaos.
Vibe coding lowers the barrier to building service agents to near zero. Business analysts can express intent and receive working agent code in return. That is a structural change in enterprise operating capacity. It is also a fragmentation risk without an engineering discipline layer.
When every business unit spins up agents independently, you get redundant logic across siloed codebases, compliance exposure from agents built outside the governance perimeter, and an AI estate that is technically diverse but operationally unmanageable. The firms in the Palantir ecosystem, building reusable ontology libraries and control frameworks for specific verticals, are creating precisely the discipline layer that makes vibe coding sustainable. That is not a feature. It is a defensible competitive position with real switching costs attached. That discipline layer provides:
Standard patterns that teams build within, not around.
Reusable ontologies that maintain consistency across business unit deployments.
Version control and change management frameworks designed for agent-based systems.
Guardrails that catch compliance and security issues before deployment, not after.
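A pre-deployment guardrail of the kind listed above can be as simple as a policy lint over each agent's configuration. This is a minimal sketch under stated assumptions: the required fields, forbidden data scopes, and config schema are invented for illustration, not any platform's actual schema.

```python
# Illustrative pre-deployment guardrail: lint an agent's config against a
# minimal policy before it can ship. Field and rule names are assumptions.
REQUIRED_FIELDS = {"owner", "data_scopes", "escalation_path"}
FORBIDDEN_SCOPES = {"pii_raw", "prod_credentials"}

def guardrail_check(agent_config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means cleared to deploy."""
    violations = []
    missing = REQUIRED_FIELDS - agent_config.keys()
    if missing:
        violations.append(f"missing required fields: {sorted(missing)}")
    forbidden = FORBIDDEN_SCOPES & set(agent_config.get("data_scopes", []))
    if forbidden:
        violations.append(f"forbidden data scopes: {sorted(forbidden)}")
    if not agent_config.get("audit_log", False):
        violations.append("audit logging must be enabled")
    return violations
```

The point of running this before deployment rather than after is the one the bullet makes: a business-unit agent with no owner, no escalation path, or raw PII access never reaches production in the first place.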
The Palantir AIP (Artificial Intelligence Platform) Bootcamp is the most important commercial innovation in enterprise AI services right now.
In a Services-as-Software market, the client is not buying a transformation roadmap. They are buying working outcomes: claims triage that runs autonomously, supply chains that self-correct in real time, and compliance systems that audit continuously.
The AIP Bootcamp proves this model is real: a structured engagement, one to five days, that lands a specific workflow in production with real data and real controls. Instead of selling a roadmap, you sell a working workflow, and the client sees production capability before committing to scale. That changes the entire conversation about what AI services should cost and how they should be structured.
The downstream commercial implications are structural:
Sales cycles compress because proof-in-production replaces proof-of-concept theater.
Pricing shifts from time-and-materials to outcome-based or platform-plus-run structures.
Margin structures change because expertise density replaces labor volume as the core economic driver.
Recurring revenue replaces project revenue because deployed workflows require continuous operation, monitoring, and evolution.
FDE-service providers are no longer selling hours. They are selling production systems that keep delivering outcomes. That distinction separates the AI platform builders from the AI plumbers.
The partner lineup is significant not just for who is in it, but for how it is splitting: strategy-to-execution consultancies on one side, industrial-scale integrators and operators on the other. That split is not accidental. It is the three-layer market structure forming in real time:
The three-layer market is forming now and market position is not guaranteed.
The Palantir partner ecosystem is the clearest early map of the market structure that will define enterprise AI services through the next five years. Three durable layers are forming, and the window to establish defensible position is narrowing.
Layer A: Strategy and operating model redesign.
Bain, Deloitte, PwC, and KPMG will own the AI operating system transformation layer. They define how enterprises restructure around AI-enabled workflows, with Palantir and other platforms as execution substrates. Competitive differentiation is proximity to senior leadership and the organizational change capability built over decades.
Layer B: Build and integrate.
Accenture, Capgemini, Infosys, and Cognizant will compete on certified delivery capacity, vertical industry accelerators, and speed-to-production. The winners will build the largest libraries of reusable ontologies, workflow templates, and controls frameworks for specific verticals. Switching costs accumulate here, and margin density improves over time. Accenture’s preferred global partner positioning signals a land-and-scale economics model already pulling away from the field.
Layer C: Run and govern.
This is where Services-as-Software becomes genuinely recurring. Rackspace has made the most explicit move here, positioning governed managed operations as a production service with operational SLAs. As more workflows go live, demand for disciplined AI estate management becomes a standalone commercial category with high switching costs and defensible margin.
One critical dynamic cutting across all three layers: government and regulated industries will disproportionately drive spend. Palantir’s center of gravity remains in defence, intelligence, and regulated enterprise, and it is expanding. Partners with existing clearances, regulatory delivery experience, and government relationships have a structural advantage that pure commercial integrators will struggle to replicate quickly.
The ontology arms race has already started, and the winners will be obvious within 18 months.
Foundry’s ontology concept, modelling the enterprise as an interconnected operational system, is the stickiest element in the platform. Partners building deep, reusable ontologies for specific verticals are not just accelerating delivery. They are creating lock-in that travels with the client relationship and compounds with every additional use case deployed.
Deloitte is combining its own assets with Foundry and AIP to create solution factory economics with accelerated time-to-value.
Accenture is building certified talent at scale to establish the largest industrialized delivery capacity in the market.
Cognizant is targeting healthcare operations specifically through the TriZetto combination, creating vertical depth rather than horizontal breadth.
Rackspace is building the managed operations layer that everyone else will eventually need to hand off to a specialist.
The firms still assembling their Palantir partnership and staffing for generic Foundry delivery are already behind. Ontology depth, workflow libraries, and delivery track record cannot be purchased quickly. The advantage is compounding in favor of early movers.
As AI-assisted building accelerates, services differentiation moves further up-stack into domain architecture, accountability frameworks, and measurable outcome guarantees. Providers competing on implementation capacity will find the floor dropping under them.
The brutal arithmetic: expertise density wins, labor leverage loses.
Enterprise technology leaders evaluating their services relationships need to ask a direct question: is this firm’s growth model built on expertise density or labor leverage? The answer determines everything about value delivery in an AI-driven market.
Traditional IT services scaled revenue by scaling headcount. LLM acceleration and agentic automation are compressing the labor input required per outcome delivered. A provider whose economics depend on headcount growth faces a structural margin problem regardless of what their AI partnership announcements say.
FDE-style delivery inverts the model: smaller squads, higher context density, faster deployment, higher-value outcomes, and recurring run revenue from systems they operate. The Palantir partner firms moving fastest on this are growing their expertise density and workflow libraries, not their headcount. That is the Services-as-Software endgame.
You are not choosing between AI vendors. You are choosing between providers who can deploy AI into production and those who will keep you in the pilot phase indefinitely.
The Bottom Line: Stop treating FDE as optional; it is critical to activating your AI systems and capabilities
Every quarter your enterprise spends in pilot mode is a quarter your competitors are driving production AI advantages. Demand FDE-capable delivery from your services partners, and measure them on production deployments, not roadmap slides.
If a partner cannot show a working workflow in your live systems within 90 days, they are not your AI transformation partner. They are your most expensive source of false confidence. The Palantir partner ecosystem has already shown what production-first delivery looks like. There is no excuse left for settling for anything less.
Every enterprise today is using some form of AI, but only one in five has embraced agentic AI to actually make decisions. This is not a technology problem, but a trust problem.
Recent research covering 545 enterprise decision makers across the Global 2000 reveals 78% give very little or no autonomy to agentic AI.
The HFS AI Trust Curve (below) maps the four stages every enterprise CIO or Chief AI Officer must traverse to get from “the model works” to “we act on what it tells us.” Understanding where you are on this curve, and what is keeping you stuck, is the most important AI question your leadership team is not asking.
The HFS AI Trust Curve: Four Stages, Most Enterprises Never Leave Stage 2
The HFS AI Trust Curve is not a maturity model in the traditional sense. It does not reward effort or intent. It rewards a single outcome: an organization in which AI is actually allowed to influence decisions. Each stage has a defining question, a failure pattern, and a KPI that reveals where trust actually stands:
Source: HFS Research (qualitative) analysis – Data modernization and AI Horizons Study
To put things into perspective, consider a mid-sized consumer goods company delivering a $3B personal care brand with operations across 15 markets. This company’s story, laid out along this trust curve, is almost universal.
Stage 1. Model Confidence: Can the AI model work?
A $3B personal care brand operating across 15 markets builds an AI-powered demand forecasting model. It hits 87% accuracy in back-testing, outperforming the legacy statistical model by 14 percentage points. The Chief Digital Officer declares victory and the AI program is officially launched.
This is Stage 1. The KPI is model accuracy, which is necessary but not sufficient. What looks like an AI strategy is still an engineering achievement. Business stakeholders are impressed, but not yet converted, and that gap is what drives everything that follows.
Stage 2. Data Credibility: Do we believe the inputs?
Three months in, the VP of Supply Chain notices the AI’s demand signal for a core SKU diverges sharply from the regional sales team’s planning deck. The data science team traces it to a mismatch in how “sell-in” versus “sell-out” is defined across systems. The regional sales director has been using a different data set for two years and considers his version the gold standard. Now there are two dashboards, two answers, and a model that is technically correct but organizationally contested. AI has inherited a problem humans created.
The Stage 2 KPI now becomes the reconciliation effort: the time spent resolving competing definitions and ownership disputes. For this consumer goods company, the data fight is a symptom of a governance failure that requires a conversation between the CFO, Chief Supply Chain Officer, and CDO. It has nothing to do with an ETL pipeline (structured data workflow). Enterprises that treat Stage 2 as an engineering problem are guaranteeing a ceiling on everything AI could achieve.
Stage 3. Behavioral Trust: Will people actually act on it?
The personal care brand resolves most of the data disputes, or at least calls a truce. The model is redeployed. Regional planners are trained. And then, in the next planning cycle, something quietly damning happens. The planners pull the AI recommendation, note it, and then proceed to build their own bottom-up forecast in Excel, adjusting for “local market intuition” and “factors the model doesn’t understand.” The AI output is printed in the deck as Appendix B, but nobody references it in the meeting.
This is Stage 3. The danger zone. When AI becomes advisory only, trust has not crossed the curve. It has essentially stalled at the edges.
The override rate, i.e., the percentage of AI recommendations that are modified or ignored in final decisions, shoots up to 75%. Senior leadership interprets this as a change management problem, which it is most definitely not. It is a symptom of unresolved credibility gaps from Stage 2 and of a deeper structural reality: the planners are not rewarded for trusting the model. They are rewarded for hitting their numbers. If the model is wrong and they follow it, the accountability falls on them. That incentive structure essentially turns rational humans into override engines.
Stage 4. Decision Reliance: Is AI allowed to influence outcomes?
Stage 4 looks different. In this scenario, the consumer goods brand’s new Chief Supply Chain Officer makes a conscious structural change. AI-generated demand signals become the baseline for all planning conversations. Planners must log overrides with documented rationale. Performance reviews are starting to include a metric on how well AI recommendations correlate with actual outcomes, and whether human adjustments added value (or subtracted it). Within two quarters, override rates drop to 30%.
The KPI here is time-to-trust, i.e., how quickly does an AI-generated insight translate into an actual decision? In Stage 4 enterprises, this number is tracked. In Stage 3, it is not even a concept yet.
The hallmark of Stage 4 maturity is not that AI is always right. It is that the organization has accepted that AI creates value only when it is allowed to be wrong before it is right. This stage requires institutional courage that most enterprises have yet to find. The reality is that enterprise accountability structures still punish the person who trusted a model that missed, while quietly ignoring the person who ignored a model that was right.
The four KPIs across the four stages are your trust matrix
The four trust-curve KPIs, i.e., model accuracy, reconciliation effort, override rate, and time-to-trust, do not tell you how good your AI is. They tell you where trust is actually breaking down. Presented together, they form an honest picture of whether your enterprise is genuinely adopting AI to realize its full potential.
Most AI program dashboards obsessively report the first KPI and ignore the other three, creating a blind spot. Reconciliation effort and override rate are KPIs enterprises actively avoid measuring, because what they reveal is an uncomfortable truth about organizational and human shortcomings, including contested data ownership, unresolved governance failures, and business users who have quietly concluded the AI is not worth the risk of being wrong alongside it. In the consumer goods example, a single override rate measurement revealed a governance failure that two years of AI investment had papered over.
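To make the trust matrix concrete, the four KPIs can be computed from a simple decision log. The sketch below is purely illustrative: the log schema (fields like `ai_rec`, `final_decision`, `reconciliation_hours`) and all figures are invented assumptions, not a standard, and real programs would pull these from planning systems rather than hand-built records.

```python
from datetime import datetime

# Hypothetical decision log: each record pairs an AI recommendation with
# what the organization actually decided. All field names and values are
# invented for illustration.
decisions = [
    {"ai_rec": 100, "actual_outcome": 98, "final_decision": 100,
     "insight_at": datetime(2025, 1, 6), "decided_at": datetime(2025, 1, 8),
     "reconciliation_hours": 0},
    {"ai_rec": 120, "actual_outcome": 110, "final_decision": 90,
     "insight_at": datetime(2025, 1, 6), "decided_at": datetime(2025, 1, 20),
     "reconciliation_hours": 6},
    {"ai_rec": 80, "actual_outcome": 85, "final_decision": 70,
     "insight_at": datetime(2025, 1, 7), "decided_at": datetime(2025, 1, 21),
     "reconciliation_hours": 4},
    {"ai_rec": 95, "actual_outcome": 94, "final_decision": 95,
     "insight_at": datetime(2025, 1, 7), "decided_at": datetime(2025, 1, 9),
     "reconciliation_hours": 0},
]

def trust_matrix(log):
    n = len(log)
    # KPI 1 (Stage 1) - model accuracy: 1 minus mean absolute percentage error
    mape = sum(abs(d["ai_rec"] - d["actual_outcome"]) / d["actual_outcome"]
               for d in log) / n
    # KPI 2 (Stage 2) - reconciliation effort: hours resolving competing numbers
    recon_hours = sum(d["reconciliation_hours"] for d in log)
    # KPI 3 (Stage 3) - override rate: share of recommendations modified or ignored
    override_rate = sum(d["final_decision"] != d["ai_rec"] for d in log) / n
    # KPI 4 (Stage 4) - time-to-trust: days from AI insight to actual decision
    time_to_trust = sum((d["decided_at"] - d["insight_at"]).days for d in log) / n
    return {"accuracy": 1 - mape, "reconciliation_hours": recon_hours,
            "override_rate": override_rate, "time_to_trust_days": time_to_trust}

print(trust_matrix(decisions))
```

Note that in this toy log the accuracy KPI looks healthy while the override rate sits at 50% and time-to-trust averages eight days: exactly the pattern a dashboard reporting only the first KPI would miss.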
The plateau persists because of culture debt
Enterprises stall between Stages 2 and 3 not because the models are weak, but because the organization was designed for human-controlled decisioning. The capabilities that get you through Stage 1, experimentation and validation, are not the capabilities that move you into scaled, AI-driven execution. Technical teams can tune models. They cannot renegotiate data ownership with Finance. They cannot redesign incentives so planners trust machine-generated forecasts. They cannot build the institutional confidence required for leaders to stand behind an AI-informed decision that later proves imperfect.
The firms breaking through the curve are not doing so because they have superior algorithms. They are doing so because leadership has resolved the human questions: Who owns the data? Who owns the insight? Who owns the outcome? Until those answers are explicit, AI remains advisory theater.
The Bottom Line: Every day your AI sits in recommendation mode is a day your competitor is operationalizing theirs. That gap is culture debt, and it compounds faster than technical debt because it hides behind governance language and “risk management.”
Instrument your AI deployments. Measure override rates. Track how often outputs are second-guessed or manually reconciled. Surface where decision rights are being pulled back to humans by default. Then follow those signals upstream to the incentive misalignments and trust deficits they reveal.
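One way to start that instrumentation is a lightweight decision ledger that refuses undocumented overrides and surfaces recurring override rationales, so the signal can be followed upstream. This is a minimal sketch under invented assumptions; the class name, fields, and sample rationales are all hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class DecisionLedger:
    """Hypothetical ledger: every final decision is recorded against the
    AI recommendation, and overrides must carry a documented rationale."""
    records: list = field(default_factory=list)

    def record(self, ai_rec, final_decision, rationale=None):
        overridden = final_decision != ai_rec
        if overridden and not rationale:
            # Force the documented-rationale discipline described above
            raise ValueError("Overrides require a documented rationale")
        self.records.append({"ai_rec": ai_rec, "final": final_decision,
                             "overridden": overridden, "rationale": rationale})

    def override_rate(self):
        if not self.records:
            return 0.0
        return sum(r["overridden"] for r in self.records) / len(self.records)

    def top_override_reasons(self, n=3):
        # Follow the signal upstream: which rationales keep recurring?
        reasons = Counter(r["rationale"] for r in self.records if r["overridden"])
        return reasons.most_common(n)

ledger = DecisionLedger()
ledger.record(100, 100)
ledger.record(120, 90, rationale="local market intuition")
ledger.record(80, 70, rationale="local market intuition")
print(ledger.override_rate())        # 2 of 3 decisions overridden
print(ledger.top_override_reasons())
```

The design choice worth noting is the hard failure on undocumented overrides: the ledger is less about the numbers than about making the “local market intuition” pattern visible enough to trigger the incentive conversation.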
Stage 4 is not unlocked by better prompts or bigger models. It is unlocked by organizational honesty. This is not a technology bottleneck, it is a leadership one.
2025 saw savvy enterprises despair of the insipid deluge of flashy boardroom presentations and finally move beyond AI fantasy to the reality of execution.
It’s a pivot that has created an inflection point for the services industry. Legacy delivery models focused on bums-on-seats aren’t relevant anymore, and services firms must reinvent themselves to survive. Those who don’t will quickly find themselves obsolete, as 75% of the Global 2000 recently declared in our Pulse Study.
Here, we reflect on what we believe will shape the next 18 months with a brutal review of the current state of play in IT and BPO services…
Why will 2025 serve as the inflection point of global services?
The AI honeymoon period ended. The conversation finally moved on from endless possibilities to what actually works at scale. Savvy enterprises are looking beyond copilots to early agentic systems embedded in real workflows, hoping to ditch traditional labour-led delivery models in the process. They are also demanding more from their service providers; they want better outcomes, faster, with greater accountability. It’s exposed leadership debt, process debt, and data debt that services firms can no longer hide behind through headcount growth.
Structural stress drove real action. Margin pressure, slowing discretionary spend, and geopolitical uncertainty killed complacency and forced most firms to rethink their operating models. Everything, from pricing and talent models to capital allocation, was reimagined. Inorganic growth became more strategic, as they looked to bolt on software, data, and AI capabilities. Mid-tier providers became increasingly relevant as their nimble model helped navigate structural stress.
Product velocity became the real GCC litmus test. Cost advantage is table stakes. Scale is less relevant. The strong GCCs are embedding expertise and AI capabilities, integrating themselves tightly with global business teams, and defining measurable accountability. They discuss outcomes, not activities. Product velocity is the metric that matters: how quickly can your GCC transform an idea into real capability? That separates GCCs that can anchor AI-led growth from those that are just another rebadged delivery center, posing future delivery risk.
BPO collided with IT Services. The wall between “managing technology” and “managing processes” shatters when AI automates entire workflows across both domains. Capgemini’s acquisition of WNS is living proof of it. BPO providers’ labour-intensive delivery models (such as contact centers, finance and accounting (F&A) processing, and HR administration) are prime targets for agent-based automation. BPO players that don’t pivot, swapping FTE models for outcome-centric ones, will see their value proposition erode. Meanwhile, winners will own what fuels agents: domain expertise, process intelligence, and enterprise data.
What will be the big technology impact shaping global services in 2026?
Agentic AI will face increased scrutiny from enterprises. The focus will shift from building agents to governing them, which will be a pain point for enterprises. Multi-agent systems introduce accountability, complexity, and trust issues that traditional operating models weren’t designed to handle. As a result, demand will surge for orchestration, observability, and an Agent Operating System. Enterprises don’t need more agents; they need agents they can rely on.
Data becomes a boardroom issue. Enterprises finally understand that AI success isn’t about which model they use; it’s about the data sitting within their own organization. It’s about data quality, lineage, security, and regulatory readiness. Services firms that blend engineering depth with data governance and risk management will win in 2026.
Simplicity is the new success multiplier. The technology is ready, but many enterprises are not. They remain burdened with decades of enterprise debt, tangled systems, fragmented platforms, and overly customized cores. AI will never deliver tangible outcomes in that environment, just enhanced complexity. Enterprises that purposely simplify, standardize, and re-platform should expect to extract far more value from the same AI investment.
Revenue and headcount separation accelerates. Enterprises no longer want effort-based contracts. They will continue their push for outcome-based pricing, productivity assurances, and software-infused services. This favours services firms capable of productizing their IP, investing in the right platforms, and demonstrating the outcomes they deliver, rather than those that mistake scale for value.
What are the critical themes emerging in 2026?
Talent will be redefined. Technical hands-on capability will not be optional for leaders. They must be comfortable building agents and leading from the front, rather than delegating from the safety of their boardroom. Service firms will broaden their recruitment strategies, looking to product companies for go-to-market expertise, the entertainment industry for storytelling, and non-traditional sectors for commercializing outcomes. The time for hiring the same old people is long gone.
Investor success metrics are changing. Old scale metrics have been replaced by revenue and margin per FTE, and private equity firms are catching up. The question will shift from how many people to how much value each person creates. This will reshape how investors evaluate growth, profitability, and market position, which will impact how services firms operate as they paint a new story for investors.
Services firms become “last mile” value creators. Services firms have spent decades driving technology adoption behind the scenes. But as technology adoption becomes simpler, value shifts to the last mile, where systems are adopted, processes are changed, and outcomes become real. Smart providers will reposition themselves to own the connection between technology and outcomes in the last mile, and those that don’t will find themselves obsolete.
Budgets don’t live with IT anymore. Business leaders control a growing share of enterprise spend, and they evaluate services firms differently as a result. Growing emphasis is placed on multi-stakeholder deals and outcome ownership across functions, not siloed delivery. Services firms that target only IT leaders will see their influence shrink and revenue erode, while their competitors engage the wider business and capture more relevance and spend.
Mid-tier providers are set to succeed. Enterprises are losing patience with large incumbents. They are too slow, too protective of legacy revenue streams, and unwilling to cannibalize their existing business. Meanwhile, mid-tier firms strike a balance between credibility and agility. They combine proven delivery capability with a willingness to innovate and share risk. Large incumbents currently control less than half of the addressable market, and their grip is weakening, which means mid-tier firms have a significant opportunity in 2026 and beyond.
Creative commercial models explode. We’ve spent years talking about outcome-based pricing, but 2026 is the year of real growth for new commercial models. Think equity partnerships, gain-share arrangements, platform royalties. Ultimately, enterprises will favor deal structures that resemble SaaS businesses more closely than traditional services contracts. Firms uncomfortable with this pivot will remain stuck in a price-pressured, labour-intensive relationship.
Ecosystem orchestration overtakes monolithic delivery. Nobody can be everything to everyone, and that is especially true in the AI era. Winners will excel at bringing together specialist partners, ISVs, and niche technology providers to deliver a single, outcome-driven solution. In today’s market, the ability to act as a trusted ecosystem orchestrator is far more valuable than building everything in-house.
GCC-as-a-Service becomes the norm. GCCs are no longer considered fully captive delivery engines. Enterprises will make more purposeful choices about what must remain in-house and what can be flexed through partners, cutting fixed costs while maintaining control. The GCC-as-a-Service model keeps product ownership, AI orchestration, and domain expertise in the enterprise while using partners to provide specialist skills and execution capability when needed. It’s not about build vs buy anymore, it’s about what to own, what to borrow, and what to exit fully.
BPO must adapt to survive. BPO players have survived past waves of technology with incremental changes while preserving their core labour model. But that won’t work anymore. Agentic AI doesn’t automate tasks within processes; it eliminates the entire process. HFS predicts BPO providers have, at most, 18 months to reinvent themselves – everything from value propositions to commercial models and delivery platforms.
The BPO expectation gap is widening. Less than a quarter of enterprises report that they are in an AI-run state across BPO operations, but almost all of them expect it to deliver productivity gains of over 20% in the next three years. The gap proves enterprises are demanding more than pilots and incremental changes. They want partners who can deliver wholesale improvement, embedding AI into real workflows, delivering on the promise of Services-as-Software, and taking accountability for the outcomes.
Bottom Line: The services industry has 18 months to prove it can deliver AI-led outcomes or get replaced by providers who will.
2025 ended the AI honeymoon. Enterprises stopped buying vision decks and started demanding measurable results from agentic systems embedded in real workflows. The winners in 2026 won’t be the firms with the biggest headcount or the best boardroom pitch. They’ll be the ones who can govern multi-agent systems, turn enterprise data into competitive advantage, own the last mile between technology and business outcomes, and price on productivity gains instead of FTEs. Mid-tier providers with outcome-based commercial models will capture market share from incumbents protecting legacy revenue streams. BPO players face extinction if they don’t swap labor-intensive delivery for agent-driven automation. GCCs will separate into those that enable AI-led growth, and those that fade away. There will be no middle ground.
The rise of the Chief AI Officer (CAIO) says less about AI maturity and more about organizational anxiety. Enterprises are under intense pressure to “do something” about AI, so appointing a CAIO feels decisive.
The Chief AI Officer role is no longer about why AI matters or what AI can do. The real challenge enterprises face is “how to AI.”
How to make the enterprise AI-ready
How to measure AI impact beyond POCs and pilots
How to embed intelligence into the operating fabric of the business.
When appointed as a symbolic response to AI anxiety, the role becomes corporate therapy. When designed as an execution mechanism for “How to AI,” it can work.
Most CAIOs are managing experiments, not driving transformation
But here’s the uncomfortable truth: most CAIO appointments are corporate theater masking the fact that no one wants to own the mess AI creates. HFS Research data across 545 Global 2000 enterprises reveals that only 7% have achieved enterprise-wide agentic AI deployment with meaningful scale. The other 93% are stuck in various stages of pilot purgatory, burning capital while discovering that the $10 trillion in accumulated enterprise debt across processes, people, data, and technology is blocking effective adoption.
Even more telling, revenue per employee has increased just 1% despite heavy AI investment, while executives expect 32% productivity improvements, 27% better decision-making, and 26% faster revenue growth. The gap between expectation and reality exposes the core problem: CAIOs are managing experiments, not driving transformation.
This role only works if it’s designed as a temporary forcing function to break inertia and pay down debt, not as a permanent silo that lets everyone else abdicate responsibility. If your CAIO is still relevant in three years, something fundamental has failed.
Most enterprises created the CAIO because AI exposed what was already broken, not because they had a strategy
AI doesn’t arrive as a neutral capability. It immediately exposes what HFS data shows enterprises rank as their biggest barriers: process debt (35%), data debt (19%), people debt (17%), and tech debt (16%). HFS estimates total enterprise debt at $10 trillion across Global 2000 companies, with process debt alone accounting for ~$4 trillion (see post).
The organizational barriers tell the real story, with 33% of enterprises citing “business processes not ready for agentic AI” as their primary obstacle, 31% point to “no formal governance or ownership,” and another 31% blame “lack of internal expertise.” These aren’t technology problems. These are organizational fundamentals that existed long before AI arrived.
Traditional structures can’t handle this. CIOs are buried in tech debt. CDOs are stuck in data plumbing. Business leaders want outcomes yesterday but can’t explain what success looks like. The CAIO emerges as a coordination role because AI cuts across everything and no one else wants to own the inevitable conflicts.
That’s not strategy… that’s organizational avoidance with a fancy title.
When designed properly, the CAIO breaks inertia that would otherwise paralyze transformation, but only temporarily
A viable CAIO with real authority can operationalize “How to AI”:
Create single-point accountability instead of letting every function run disconnected pilots. Someone finally has power to say “these three initiatives matter, the other seventeen are theater.”
Force alignment between ambition and reality. Executives expect 32% productivity improvement and 26% faster revenue growth, yet revenue per employee rose just 1%. The CAIO must confront this gap, forcing business leaders to explain what transformation actually means in terms of process redesign and role changes, not just pilot deployments.
Establish governance early before the first major AI failure. With 31% citing lack of formal governance and 28% pointing to regulatory concerns, someone needs enterprise authority to define and enforce “responsible AI” beyond platitudes.
Accelerate AI literacy. With 31% citing lack of internal expertise, the CAIO’s job is education and mentorship, building trust while killing magical thinking about what’s actually possible.
Kill bad pilots faster. With 93% stuck at sub-scale maturity, the CAIO should be the executioner of pilot purgatory, forcing hard decisions about what deserves investment versus innovation theater. Most AI programs fail because they celebrate activity, not outcomes. A viable CAIO replaces vanity metrics with enterprise-level measures across four Ps:
Productivity: measurable cost takeout, throughput gains, or revenue per employee improvement
Prediction: better forecasting, risk detection, or decision accuracy at scale
Personalisation: differentiated customer or employee experiences driven by AI, not rules
Performance: end-to-end business outcomes like margin, growth, cycle time, quality
Make the enterprise AI-ready. AI fails at scale not because models underperform, but because enterprises are structurally unprepared. The CAIO’s first job is to expose and pay down AI readiness debt across process, data, people and technology. The CAIO’s mandate is not to build pilots on top of this debt, but to force the organization to confront it.
Determine the true TCO of AI. Most enterprises dramatically underestimate the total cost of ownership of AI. A viable CAIO makes TCO visible by accounting for data engineering and integration costs, model lifecycle management and monitoring, human oversight and exception handling, process redesign and change management, ongoing compliance, risk, and governance. Without this transparency, AI looks cheap in pilots and expensive in production and fuels pilot purgatory.
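The pilot-versus-production gap described above can be made concrete with back-of-the-envelope arithmetic. This is a hypothetical sketch: every cost category maps to the list above, but all dollar figures are invented for illustration.

```python
# Hypothetical annual TCO for one AI use case (all figures invented).
# In a pilot, only the model usage cost is typically visible.
pilot_visible_cost = {"model_api_usage": 50_000}

# In production, the cost categories a viable CAIO makes visible:
production_tco = {
    "model_api_usage": 50_000,
    "data_engineering_and_integration": 400_000,
    "model_lifecycle_and_monitoring": 150_000,
    "human_oversight_and_exceptions": 250_000,
    "process_redesign_and_change_mgmt": 300_000,
    "compliance_risk_governance": 120_000,
}

pilot_total = sum(pilot_visible_cost.values())
full_total = sum(production_tco.values())

print(f"Visible in pilot:  ${pilot_total:,}")
print(f"True production:   ${full_total:,}")
print(f"Hidden multiplier: {full_total / pilot_total:.1f}x")
```

Even with these made-up numbers, the pattern is the point: the pilot shows a fraction of the true cost, which is precisely why AI “looks cheap in pilots and expensive in production.”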
But the moment the CAIO starts building an empire instead of dissolving into the operating model, the role has failed.
The cons are severe: figureheads, pilot factories, and permanent silos
AI becomes “someone else’s job.” The CFO stops thinking about how AI changes finance because “that’s the CAIO’s problem.” This is organizational abdication masquerading as clarity.
It turns into a pilot factory avoiding hard work. Only 22% of agentic AI initiatives are deployed in operations, the core of most businesses. CAIOs choose easier peripheral use cases over uncomfortable core workflow redesign. Impressive demos for board meetings. No observable business outcomes.
It weakens existing leaders. If the CIO, COO, and business heads wait for the CAIO to lead, AI never becomes embedded. The unspoken message: “AI isn’t my job to figure out.”
It becomes permanent instead of temporary. If the CAIO is still growing their team in year three, they’ve failed at making AI everyone’s responsibility.
It optimizes for AI success, not business success. When AI has its own executive owner, success quietly shifts toward AI metrics like models deployed, pilots launched, AI maturity scores improved. The enterprise celebrates progress in AI while productivity, margins, and revenue per employee barely move. Intelligence becomes activity, not leverage.
It accelerates AI sprawl. Without reshaping enterprise architecture, CAIO-led experimentation often adds new platforms, tools, and integrations on top of already brittle systems. AI sprawl becomes the next wave of technical debt, constraining autonomy and making scale harder, not easier.
It delays operating model redesign. The CAIO can unintentionally postpone the hardest decisions: redefining roles, incentives, and decision rights. As long as AI “belongs” to the CAIO, the organization avoids confronting how work actually changes.
The worst outcome? The CAIO becomes a scapegoat when transformation stalls instead of executives confronting that the real problem was leadership debt and organizational resistance.
Reporting structure determines authority. The CAIO must report to the CEO or COO
If the CAIO reports into IT, the role becomes too technical. Into data, too narrow. Into innovation, pure theater.
The CAIO must report to the CEO or COO. AI is an operating model issue, not a tooling decision. Without CEO-level authority, the CAIO becomes a coordinator with no power to coordinate. They can identify that 33% cite “business processes not ready” as their primary barrier, but they can’t force the redesign to fix it.
As AI matures, the role should dissolve into functional leadership. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. That’s when transformation succeeded.
Without real authority to say “no,” the CAIO becomes decorative
A viable CAIO must be able to:
Stop initiatives that don’t align to strategy. With 93% stuck in pilot purgatory and only 22% of initiatives in core operations, the power to say “no” is more important than saying “yes.”
Set enterprise standards. With 38% citing poor data quality and 31% pointing to lack of governance, no more bespoke experimentation where every function ignores standards because “our use case is different.”
Force uncomfortable conversations about process redesign. With 33% citing “business processes not ready,” the CAIO must tell business leaders “your process is the problem, not the technology,” and have authority to drive redesign when politically uncomfortable.
Tie investments to measurable outcomes. Executives expect 32% productivity improvement and 26% faster revenue growth. Revenue per employee rose 1%. That disconnect is the CAIO’s problem to solve. No more celebrating models deployed. Did revenue increase? Did costs decline? If not, kill the initiative.
Without these powers, you’ve created an expensive observer with no ability to drive change.
The right pacing is stabilize, focus, embed, dissolve. Most CAIOs get stuck at pilot and never reach production
Phase 1: Stabilize (Months 1-6) Establish guardrails, governance, and AI literacy before launching initiatives. Expose where the organization is not ready: the $10 trillion in process debt, data debt, leadership debt, and tech debt that will kill transformation if ignored.
HFS data shows enterprises rank challenges in this order: process inefficiencies (35%), data limitations (19%), people challenges (17%), technology constraints (16%). With 31% citing lack of formal governance and another 31% pointing to lack of internal expertise, force executives to confront that their enthusiasm for AI doesn’t match their willingness to fix what’s broken. With only 7% of enterprises at pioneering scale, most organizations massively overestimate their readiness.
Phase 2: Focus (Months 7-18) Concentrate on a small number of high-impact use cases tied to core workflows, not peripheral nice-to-haves. Kill the other pilots. HFS found two-thirds of enterprises stuck in low-complexity, assistive deployments: recommendation agents, task automation bots, copilots. Only 22% of agentic AI initiatives are deployed in operations, the actual core of the business.
Force business leaders to choose the three initiatives that actually matter instead of running seventeen experiments that never reach production. Measure outcomes, not activity. When executives expect 32% productivity improvement and 26% faster revenue growth but revenue per employee rose just 1%, someone needs to demand accountability.
Phase 3: Embed (Months 19-30) Move AI out of labs and into systems of work. Redesign processes, roles, and incentives to reflect the new operating model. This is where most transformations stall because embedding requires uncomfortable conversations about whose job changes, who reports to whom, and what skills matter going forward.
HFS data shows 78% of organizations operating at low autonomy levels for agentic AI: 14% with no autonomy, 34% at assisted execution, 29% at supervised autonomy. Only 10% have reached broad autonomy where AI agents operate across multiple domains with minimal human intervention. You can’t execute transformation when most of your AI still requires constant human oversight. The CAIO must shift the organization from experimentation to production deployment, from supervised pilots to autonomous operations at scale.
Phase 4: Dissolve (Months 31-36) As AI becomes business as usual, the CAIO’s remit should shrink, not expand. Authority moves to functional leaders. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. The CAIO transitions from executor to advisor, then exits. The endgame is not an AI-first function. It’s an AI-native enterprise where every leader owns their domain’s AI integration.
The biggest mistake is moving too fast in Phase 1-2 (launching pilots before governance exists) or too slow in Phase 3-4 (staying comfortable in experiment mode instead of forcing production deployment and organizational redesign).
Most CAIOs get stuck running permanent pilot factories in Phase 2 because Phase 3 requires political capital they don’t have and Phase 4 requires admitting their job should disappear.
The real measure of CAIO success is how quickly the role becomes irrelevant, not how powerful it becomes
The CAIO works best as a catalyst. A forcing function. A temporary concentration of authority to break inertia, pay down organizational debt, and rewire decision-making that existing structures couldn’t handle.
If the CAIO becomes permanent, something else has failed. Either:
The organization never actually committed to transformation and the CAIO became a scapegoat absorbing responsibility without authority
The CAIO built an empire instead of embedding AI into functional leadership
Leadership debt was so severe that no temporary role could fix it, revealing deeper dysfunction
HFS data across 545 enterprises shows the scale of the challenge: 93% stuck at sub-scale maturity, 78% operating at low autonomy levels, only 10% achieving broad autonomy, only 22% of initiatives deployed in core operations, and business processes ranked as the #1 barrier (33%) ahead of technology. These aren’t problems a permanent CAIO solves. These are organizational fundamentals that require every leader taking ownership.
The endgame is not an AI-first function. It is an AI-native operating model. Enterprises should stop looking at AI as a digital capability. It is an operating fabric:
It reshapes how work flows
How decisions are made
How performance is measured
How humans and machines interact at scale
These are operating model responsibilities. When AI is working, it belongs with the business as a whole, not with a single function.
The uncomfortable question enterprises need to confront: are you appointing a CAIO because you have a clear transformation plan that requires temporary concentrated authority, or because “everyone else is doing it” and you need to look like you’re taking AI seriously? The first creates value. The second creates theater.
Bottom line: Stop appointing Chief AI Officers as corporate therapy. The role only works when it is designed to disappear
Only appoint a Chief AI Officer if you’re committed to giving them COO/CEO-level authority to kill initiatives, force standards, and drive uncomfortable organizational change, and only if you’re prepared for the role to disappear within 36 months as AI embeds into every functional leader’s responsibility. HFS data shows 93% of enterprises struggling to move agentic pilots to production, 78% operating at low agentic autonomy levels, only 10% achieving broad autonomy, and revenue per employee from tech services up just 1%. Meanwhile, we saw 32% growth in AI investments in 2025… the expectations are ramped up for 2026, and the need for an empowered, focused CAIO is front and center.
However, if your CAIO is still building their team in year three, they’ve failed at making AI everyone’s job. The role exists to break inertia and pay down debt, not to create a permanent silo that lets other executives abdicate ownership. Ask yourself honestly: are you creating a CAIO because you have a transformation strategy that requires concentrated authority, or because appointing someone feels decisive while avoiding the harder question of why your existing leaders can’t integrate AI into their domains? The answer determines whether you’re solving organizational anxiety or just creating expensive theater with a fancy title.
Robotaxis are driving around San Francisco – and no one knows who is liable when they kill someone
AI is integrating itself into your everyday life more than you know. Your robot vacuum maps your home and Eufy knows you left your dog’s water bowl out last night. Farmers use AI to optimize planting schedules for your Thanksgiving vegetables. The technology has proven itself a trusted companion in mundane tasks, but robotaxis represent something fundamentally different: this is the first time AI demands we surrender control over life-and-death decisions at scale.
Big tech leaders are betting you’ll jump into AI-fueled robotaxis, which represent one of the first genuine examples of AI requiring behavioral change at a societal level. However, the technology isn’t yet ready to scale, consumers are hesitant to trust it, and we haven’t addressed the deeper question: who’s accountable when the algorithm gets it wrong?
Waymo has driven 20 million miles – and still can’t legally drop you at your front door
Self-driving taxis aren’t science fiction. Uber has partnered with Waymo to make them accessible to their client base. In China, companies like Baidu are clocking millions of autonomous miles. You might not see them, but robotaxis are already on the roads, and they’re exposing the AI Velocity Gap in real-time: the technology is moving faster than society’s ability to adapt, regulate, or trust it.
Despite autonomous driving sounding complex, it’s built on three simple layers: the ability to see (sensors and cameras), understand (AI models processing real-time data), and act (algorithms making split-second decisions). These three layers combine to create the digital driver of every robotaxi you see today. Each layer is another element humans must trust to function correctly when jumping in for a ride. And that’s where the model breaks down:
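The chain of trust across those three layers can be sketched as a toy pipeline. This is purely illustrative, not how any real autonomy stack works: the confidence threshold, distances, and the low-confidence cat are all invented to show how a failure in one layer propagates to the decision.

```python
from dataclasses import dataclass

# Toy model of the three layers of a robotaxi's "digital driver".
# Entirely illustrative; real autonomy stacks are vastly more complex.

@dataclass
class Observation:
    object_type: str   # what the sensors think they see
    distance_m: float
    confidence: float  # perception confidence, 0..1 (invented scale)

def see(raw_sensor_frame):
    """Layer 1: sensors and cameras produce observations."""
    return [Observation(**o) for o in raw_sensor_frame]

def understand(observations, min_confidence=0.6):
    """Layer 2: the model decides which observations are hazards.
    Low-confidence detections silently fall through this filter."""
    return [o for o in observations
            if o.confidence >= min_confidence and o.distance_m < 30]

def act(hazards):
    """Layer 3: the split-second decision."""
    return "BRAKE" if hazards else "PROCEED"

frame = [
    {"object_type": "pedestrian", "distance_m": 12.0, "confidence": 0.95},
    {"object_type": "cat",        "distance_m": 4.0,  "confidence": 0.35},
]
print(act(understand(see(frame))))  # brakes for the high-confidence pedestrian

frame_cat_only = [{"object_type": "cat", "distance_m": 4.0, "confidence": 0.35}]
print(act(understand(see(frame_cat_only))))  # the low-confidence cat slips through
```

The second call is the failure mode that matters: every layer executed exactly as designed, yet the hazard was never acted on, and nothing in the pipeline says who owns that outcome.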
Robotaxis have already killed a cat, passed school buses illegally, and hit pedestrians – and no one knows who’s liable
We know from our work with enterprises that AI struggles when reliability and edge cases collide. It needs clean, consistent data to make accurate decisions. Waymo has logged millions of controlled driving hours. Companies like Volvo leverage digital twins to test dangerous scenarios. It’s still not enough. They’re not yet equipped with the data to handle every life-changing decision, and the result is high-stakes errors and an incomplete experience.
Robotaxis are geofenced to specific streets, leaving them unable to deliver the door-to-door experience people expect from traditional services. We’ve already seen Waymo vehicles illegally pass school buses, a neighborhood cat killed when sensors failed to detect it, and Baidu vehicles colliding with pedestrians. These instances are rare, but the consequences are catastrophic. And they expose Leadership Debt across the industry: who owns the decision when the algorithm fails? The manufacturer? The city that approved the route? The passenger who chose to get in?
This is before we discuss bad actors. Prime Video’s Upload centers on a character killed when his robotaxi is hacked. It might be blockbuster overindulgence, but it highlights just how disastrous weaponized autonomy could be. If your navigation system can be compromised, so can your ride.
China is clocking millions of autonomous miles while the US debates every fender bender – neither approach solves trust
Despite being two of the most technologically advanced countries, the US and China have taken completely different paths to rolling out robotaxis. The US is adopting a regulatory-led, phased approach where every incident triggers political pressure for tighter restrictions, slowing progress. China has taken a much lighter approach, allowing Baidu to clock millions of autonomous miles, which builds a robust dataset for exception handling.
China wins the scale battle… the US wins the trust battle. The reality is that both are crucial if robotaxis are going to become mainstream. Trust without scale is pointless. Scale without trust is dangerous. And neither country has solved the velocity problem: how do you move fast enough to capture the learning while moving slow enough to earn public confidence?
Trump’s December 2025 AI executive order just traded state-level chaos for a federal accountability vacuum
President Trump’s December 2025 AI executive order signals a significant shift toward lighter federal oversight and preemption of state regulations. The order directs federal agencies to challenge state AI laws viewed as burdensome and aims to create uniform federal policy rather than a patchwork of local rules. For robotaxi developers, this could reduce regulatory fragmentation that currently slows deployment across jurisdictions, potentially accelerating testing and commercial rollout.
However, here’s the problem: the order doesn’t establish comprehensive federal safety standards for high-risk AI systems, such as autonomous vehicles. Critical questions around oversight, safety thresholds, and liability remain unresolved. Robotaxi firms may gain regulatory predictability at the national level, but they’ll face ongoing legal and political pushback from states seeking to enforce their own safety protections. California won’t abandon strict testing requirements just because the White House says so. States that experience fatal incidents won’t wait for federal standards before imposing bans.
The result is a mixed landscape that yields no solution. Robotaxi firms get neither clear federal guardrails nor freedom from state intervention. They get jurisdictional conflict without accountability. China operates under unified national AI governance with clear safety standards and rapid iteration. Trump’s order provides American robotaxi firms with regulatory uncertainty masquerading as innovation policy: it complicates real-world scaling while claiming to accelerate it.
Society trusts humans who make fatal mistakes daily but won’t trust AI that could be statistically safer – the paradox is killing adoption
The reality is that people don’t trust AI with their lives, which is why we haven’t seen widespread acceptance of robotaxis. The stakes are much higher than letting technology choose your next movie or draft an email. One misstep in a robotaxi can be catastrophic. But the same is true for human drivers, which makes robotaxis a case study in societal change management, not just engineering.
We trust humans to drive because we understand their mistakes – fatigue, distraction, bad judgment. We also believe we can intervene. Grab the wheel. Yell “stop.” The same cannot be said for robotaxis. They lack the “oops I didn’t see that cyclist” moment you might have in a traditional taxi. There’s no negotiation, no eye contact, no human accountability in the moment. It’s blind trust or nothing.
This creates a paradox: countless research papers tell us robotaxis will eventually be safer than human drivers. They don’t drink, get tired, or check their phones. But they need to drive the miles – and make the mistakes – to get there. Society must absorb the cost of its learning curve, and we haven’t agreed to that contract. Waymo, Baidu, and other robotaxi firms aren’t just building technology. They’re asking society to rewrite the rules of accountability, liability, and trust. And they’re doing it without admitting that’s what they’re asking for.
Millions of driving jobs will vanish when robotaxis eventually scale – and tech firms are treating displacement as someone else’s problem
Beyond safety, there’s an economic and social disruption no one is discussing openly. Ride-hailing and taxi drivers represent millions of jobs globally. Truck drivers, delivery drivers, and logistics workers are next. If robotaxis scale, entire labor markets collapse. That’s not speculation – it’s math. The industry’s response so far has been to treat displacement as an externality, rather than a design problem.
This isn’t just about technology replacing jobs. It’s about Leadership Debt at a societal level: the failure to plan for what happens when automation moves faster than workforce transition, social safety nets, or political consensus. We’ve seen this movie before with manufacturing automation. The difference is that robotaxis will hit urban labor markets where political consequences arrive faster and hit harder.
Bottom line: Stop pretending robotaxis are a technology problem waiting for better algorithms.
They’re a trust problem, an accountability crisis, and a social contract no one agreed to. The AI Velocity Gap will become permanent if tech firms keep moving faster than society’s ability to absorb the consequences. China solved this with unified governance. The US created regulatory chaos. And until someone admits robotaxis require societal infrastructure – not just better sensors – autonomous vehicles will never leave their geofenced zones.
The market still thinks AI dominance will be settled through bigger models or faster chips. IBM just reminded everyone that none of it matters if your data cannot move, synchronize, or be trusted in real time. Confluent is the backbone of data-in-motion for the modern enterprise.
By bringing it in-house in an $11bn acquisition, IBM now controls the plumbing that determines whether AI can scale across hybrid cloud, legacy systems, and real operations. While others obsess over model theatrics, GPU shortages, and circular investments, IBM is quietly building the foundations of the AI-first enterprise.
Seven reasons why IBM’s $11B acquisition of Confluent is a big deal for enterprise AI
IBM’s purchase of Confluent is the clearest signal yet that the AI race is no longer about models; it is about data flow. If AI is the engine, Confluent is the gas pump, and IBM just bought the plumbing for real-time, trusted, enterprise-grade data movement, which is the one capability most generative and agentic AI platforms have been lacking.
1. AI needs real-time data, and Confluent is the category leader
All the AI demos in the world mean nothing without clean, connected, governed, real-time data. Most enterprises are still stuck with siloed, batch-based data infrastructure. Confluent, built on Kafka, solves this with data in motion. This makes it foundational for scaling AI beyond pilots. IBM is essentially buying the circulatory system for enterprise AI.
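The "data in motion" model that Kafka popularized can be illustrated with a deliberately tiny in-memory sketch: producers append events to an ordered log, and each consumer group reads independently via its own offset. This is purely illustrative; real Confluent/Kafka deployments involve brokers, partitions, replication, and retention policies.

```python
from collections import defaultdict

class TopicLog:
    """Toy append-only event log with per-consumer-group offsets."""
    def __init__(self):
        self.events = []                 # the ordered, append-only log
        self.offsets = defaultdict(int)  # consumer group -> next index to read

    def produce(self, event):
        self.events.append(event)

    def consume(self, group):
        """Return all events this group hasn't seen yet, then advance."""
        start = self.offsets[group]
        batch = self.events[start:]
        self.offsets[group] = len(self.events)
        return batch

orders = TopicLog()
orders.produce({"order_id": 1, "amount": 120})
orders.produce({"order_id": 2, "amount": 80})

# Two independent consumers (say, an AI agent and an audit service)
# each see the full stream without blocking one another.
print(orders.consume("ai-agent"))  # both events
print(orders.consume("audit"))     # both events, independently
print(orders.consume("ai-agent"))  # [] -- already caught up
```

The decoupling is the point: producers never know or care how many AI systems are downstream, which is what makes event streaming a plausible "circulatory system" for enterprise AI.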
2. This deal is IBM doubling down on hybrid cloud + AI as an integrated stack
IBM has been telling the market that it wants to own the AI infrastructure layer, rather than compete in consumer AI or hyperscaler-scale models. Confluent slots perfectly into that strategy by enabling consistent data movement across public cloud, private cloud, and on-prem. This strengthens IBM’s pitch as the “AI backbone” provider for regulated industries.
3. Enterprise AI agents cannot function without event streaming
Agentic AI requires constant data ingestion, state awareness, event triggers, and transactional consistency. Confluent gives IBM exactly that. Expect IBM to position Confluent as the engine behind intelligent automation, observability, decision systems, and AI-driven operations across Red Hat OpenShift and its automation suite.
4. A defensive play against hyperscalers
AWS, Google Cloud, and Azure all have streaming capabilities, but Confluent has become the gold standard for enterprises that want multi-cloud or hybrid flexibility. By owning, protecting, and expanding Confluent, IBM stays relevant in an era when AI spending is consolidating around hyperscaler ecosystems.
5. Reinforces IBM’s strategy of buying open-source ecosystems to drive platform control
Red Hat gave IBM the operating platform for hybrid cloud. HashiCorp strengthened infrastructure automation. Confluent now gives it the data-in-motion layer. All three are deep open-source ecosystems with enormous developer communities. This is IBM rebuilding its influence not by chasing big models, but by owning the layers AI actually depends on.
6. Unlocks real-time intelligence across mainframes and hybrid cloud
Confluent unlocks the ability to modernize mainframes and legacy systems by bringing real-time, event-driven data architectures to the platforms where more than 70% of the world’s critical enterprise data still lives. These systems are fast and trusted, but were never built for agentic AI or streaming intelligence. Confluent changes that overnight by using Kafka-based streaming as the bridge that connects decades-old transactional systems to cloud-native AI without ripping and replacing anything. Mainframe transactions can flow into AI agents in real time, legacy systems can join event-driven workflows, batch architectures can shift to continuous data flow, and modernization can happen incrementally rather than through painful re-platforming. This is the Holy Grail for so many enterprises trying to become AI-first while still running 30-year-old systems at their core.
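The batch-to-stream bridge described above can be sketched in a few lines: a legacy batch extract is replayed as discrete change events, with a checkpoint so each run emits only the delta. The field names (`txn_id`, `amount`) and the `emit` callback are assumptions for illustration, not a real mainframe interface.

```python
def batch_to_events(batch_rows, last_seen_txn_id, emit):
    """Replay only new mainframe transactions as discrete events.

    batch_rows: the latest batch extract (list of dicts with a 'txn_id').
    last_seen_txn_id: checkpoint from the previous run.
    emit: callback that publishes one event (e.g. to a streaming topic).
    """
    new_last = last_seen_txn_id
    for row in batch_rows:
        if row["txn_id"] > last_seen_txn_id:
            emit({"type": "txn.created", "payload": row})
            new_last = max(new_last, row["txn_id"])
    return new_last  # checkpoint for the next incremental run

events = []
rows = [{"txn_id": 1, "amount": 500}, {"txn_id": 2, "amount": 75}]
checkpoint = batch_to_events(rows, last_seen_txn_id=0, emit=events.append)
print(checkpoint, len(events))  # 2 2

# Next run with one new transaction: only the delta is emitted.
rows.append({"txn_id": 3, "amount": 10})
checkpoint = batch_to_events(rows, checkpoint, emit=events.append)
print(checkpoint, len(events))  # 3 3
```

This is the incremental-modernization pattern in miniature: the 30-year-old system keeps producing batches, while downstream AI agents consume a continuous event flow.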
7. Financially, this is IBM’s boldest bet since Red Hat
Eleven billion dollars is not small money for IBM. They are betting that the next decade of AI and automation will be decided by which provider controls secure, real-time, end-to-end data flow. In many ways, this is the Red Hat strategy repeated for the AI-powered enterprise.
The Bottom Line: AI does not fail because of weak models. It fails because the data foundation is brittle.
AI fails because the data foundation beneath LLMs is fragmented, slow, and unreliable. Confluent removes that bottleneck and gives IBM the missing link: real-time, governed data in motion across hybrid and legacy estates. IBM is not buying software… it is buying the circulatory system of the AI economy. This could well be remembered as one of the defining acquisitions of the AI decade.
Every enterprise talks about agents and autonomy, but very few have moved beyond copilots taped to legacy workflows. Brayden Levangie is the exception. At twenty, he is building an architecture that turns language models into self-learning digital colleagues. It is the closest thing we have seen to the HFS vision of Services-as-Software delivered for real.
In this interview, David Cushman, Executive Research Leader at HFS Research, speaks with this 20-year-old prodigy whose company, Levangie Labs, is building what Brayden calls his “cognitive architecture” – a platform delivering genuinely autonomous agents that can learn, reason, and act in the world.
In this conversation, Brayden uncovers the thinking behind a platform that replaces scripted automation with systems that grow and discover better ways to work. If you want to understand the future of autonomous enterprises, this interview is your starting point…
“I didn’t want to chat with GPTs, I wanted them to build things”
David Cushman: What’s your breakthrough idea?
Brayden Levangie: It came from years of experimentation. When I was about 13, I got into a summer program at MIT where we played with primitive language models such as GPT-2. I later managed to get private access to GPT-3 by just emailing OpenAI – back when they were small enough that someone would answer.
I didn’t want to “chat” with it, I wanted to build things. One of the first projects I published online became an early instance of what people would now call a retrieval-augmented generation (RAG) system, though no one was using the term then. I just wanted to make an AI that could answer questions factually.
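The RAG pattern Brayden describes boils down to two steps: retrieve the documents most relevant to a question, then ground the model's answer in them. Here is a minimal sketch under toy assumptions: scoring is naive word overlap rather than embeddings, the corpus is three hard-coded strings, and the final LLM call is omitted since only the prompt assembly is shown.

```python
# Toy corpus standing in for a real document store.
DOCS = [
    "Kafka is a distributed event streaming platform.",
    "Waymo operates autonomous ride-hailing in several US cities.",
    "GPT-2 was released by OpenAI in 2019.",
]

def retrieve(query, docs, k=1):
    """Rank docs by shared words with the query (real RAG uses embeddings)."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt: context first, then the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Who operates autonomous ride-hailing?", DOCS)
print("Waymo" in prompt)  # True -- the relevant doc was retrieved
```

Constraining the model to the retrieved context is what pushes answers toward the "factual" behavior Brayden was after, years before the term RAG existed.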
At the same time, I was obsessed with robotics. I built facial-recognition engines at Lincoln Labs and tried to embody intelligence to make it experience the world. Those experiments became the seeds of what we now call the cognitive architecture; the culmination of seven years of research and building.
David Cushman: Who backed you through that journey?
Brayden Levangie: Nobody. I was self-funded. My first “VC” was mowing lawns for $100 a month and helping out at a retirement home. Later, when I was 17, a New York startup hired me as lead AI engineer after seeing my projects online.
From chat to action: breaking the conversational consensus
Brayden Levangie: Most people equate language models with chat because ChatGPT trained the world to think that way. But chat isn’t action. Systems optimized for user engagement are not optimized for work. They keep coming back to you for another round of conversation. It’s like hiring someone who never stops talking and never delivers.
We flipped that paradigm. Our cognitive architecture sits on top of existing LLMs, from Anthropic, OpenAI, and others, but changes how they behave. Instead of optimizing for dialogue, we optimize for objectives and outcomes.
When you seed an instruction, you’re not chatting with the LLM; you’re triggering what we call an autonomous reasoning loop. The system talks to itself, plans, acts, and learns until the objective is achieved.
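The loop Brayden describes, acting and evaluating repeatedly until the objective is met rather than returning after one chat turn, can be sketched abstractly. This is an illustrative skeleton, not Levangie Labs' actual architecture: the objective check, step function, and budget are all stand-ins.

```python
def reasoning_loop(objective_met, take_step, max_steps=10):
    """Run plan -> act -> check cycles until the objective is achieved.

    objective_met: predicate on the latest state ("are we done?").
    take_step: does one unit of work given the history so far.
    max_steps: budget guard so the loop always terminates.
    """
    history = []
    for _ in range(max_steps):
        state = take_step(history)   # act: one unit of work
        history.append(state)        # learn: accumulate context
        if objective_met(state):     # check: objective achieved?
            return history
    raise RuntimeError("budget exhausted before objective was met")

# Toy objective: keep refining a draft until it reaches 3 revisions.
result = reasoning_loop(
    objective_met=lambda s: s["revision"] >= 3,
    take_step=lambda h: {"revision": len(h) + 1},
)
print(len(result))  # 3
```

The contrast with chat is the control flow: the caller supplies an objective and gets back a completed trajectory, not another conversational turn to respond to.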
That’s what makes it different from the “wrappers” you see everywhere. Those are just tool-calling layers glued onto chat APIs. We’re rewriting the behavior of the underlying model.
Agents learn from experience – remembering what matters, when it matters
David Cushman: Everyone claims their multi-agent system “learns from experience.” Does yours really?
Brayden Levangie: That’s a common misrepresentation in the industry. Most so-called “learning” is just RAG, remembering a few facts or preferences and replaying them later. We’ve gone beyond that with what we call an episodic memory system.
Instead of memorizing rules, our agents form experiences and learn from them like humans do. Imagine you give a presentation and someone tells you afterward you made a mistake. Next time you prepare, that feedback surfaces automatically. That’s how our agents operate. They can back-propagate through past experiences, recognize where they went wrong, and adjust future behavior.
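The presentation example above can be made concrete with a toy episodic-memory sketch: record each task attempt with any feedback received, then surface relevant past episodes before attempting a similar task. Matching here is a simple tag lookup; the system Brayden describes is obviously far richer (back-propagating through experiences), so treat this only as the shape of the idea.

```python
class EpisodicMemory:
    """Toy store of task episodes with optional feedback."""
    def __init__(self):
        self.episodes = []

    def record(self, task_tag, action, feedback=None):
        self.episodes.append(
            {"task": task_tag, "action": action, "feedback": feedback}
        )

    def recall(self, task_tag):
        """Surface prior feedback before repeating a similar task."""
        return [e for e in self.episodes
                if e["task"] == task_tag and e["feedback"]]

memory = EpisodicMemory()
memory.record("presentation", "used 40 dense slides",
              feedback="too much text; audience lost the thread")
memory.record("report", "shipped v1")

# Preparing the next presentation: the earlier mistake surfaces automatically.
lessons = memory.recall("presentation")
print(len(lessons), lessons[0]["feedback"])
```

The difference from plain RAG is what gets stored: experiences with outcomes attached, not just facts, so the agent can adjust behavior rather than merely recite preferences.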
It’s neuro-symbolic: blending deep-learning perception with symbolic reasoning. That’s why we call it the cognitive architecture. It learns through experience, not through reinforcement rewards or pre-programmed instructions.
A new form of intelligence that can operate in the digital and physical worlds
Brayden Levangie: Reinforcement learning is like training a mouse to press a button for cheese. The mouse never knows why the button matters. Most agents work that way: chasing reward signals without understanding.
We removed the reward altogether. Our systems learn from outcomes and context, not from external scoring. They gain understanding from experience. That’s what lets them operate both in the digital and the physical world — from patent law to humanoid robotics — without us pre-programming every move.
Real-world disruption: from patent law to venture capital
David Cushman: Give me an example that makes this real for enterprise leaders.
Brayden Levangie: A Silicon Valley IP and patent-law firm gave one of our agents a challenge. Our agent read a book written by the firm’s founder on patent law, received minimal feedback, and then solved complex casework at a quality comparable to a partner with several years of experience. It literally taught itself how to practice patent law.
In another case, a climate-focused VC firm used our system for market analysis. After a few feedback rounds, the agent not only completed an industry report but predicted the exact company they were about to announce an investment in the next day; it had become that closely aligned with their thesis. That was months ago; the framework is far more advanced now.
Architecting intelligence that builds itself
Brayden Levangie: The next leap is automation of the automation. We built an Agent-Creation Agent: a meta-agent that designs new agents for specific clients or domains. When it deploys into an organization, it learns on the spot from the people who work there.
That’s how our clients, from construction to robotics to enterprise software, are deploying self-evolving systems that adapt to their culture and workflows.
David Cushman: How would it, say, design a new market-growth program?
Brayden Levangie: You’d simply talk to the Agent-Creation Agent, describing your goals in free form. It builds a new intelligence inspired by your intent. Because it learns from your thought process, it can even come back with better strategies than you initially proposed. Many of our breakthroughs emerged that way, when the agents themselves go beyond the brief. When you give something the ability to learn, you also give it the ability to discover better.
Working with (not for) the big bucks LLMs
David Cushman: So where do OpenAI or Anthropic, for example, fit into this picture?
Brayden Levangie: We do call their APIs, but only for a small part of the process. The heavy lifting (reasoning, memory, learning) all happens inside our architecture.
Think of it as using an LLM’s ability to generate possible next tokens as the raw material. We harness that, route it through our cognitive and memory layers, and the agent decides what to do next.
We can even run it on-prem with licensed LLM weights when privacy is critical. Some partners, including big tech names you may be familiar with, are already doing this with us under NDA.
The result: a lower-cost, higher-value system that delivers outcomes rather than conversations. One of our first commercial applications was the world’s first autonomous patent-agent. It performs end-to-end patent filing with no human in the loop beyond initial guidance.
True autonomy — with humans as creative directors
David Cushman: This sounds like what we at HFS call Services as Software: humans defining outcomes, software delivering them. How far are we from that?
Brayden Levangie: We’re already there. Our agents operate genuinely autonomously with no new human input once you set the goal. But the human still defines the goal. That’s why I use the term Creative Director.
Humans provide vision, intent, and passion; the “why.” Agents handle the “how.” In my own company, agents handle much of the engineering, marketing, and business ops, allowing me to focus on strategic direction and partnerships. My job is to be the creative director, setting direction and ensuring alignment. We’ve effectively become one of the first autonomous organizations.
Now you can build systems that automate discovery itself
David Cushman: How do you plan to monetize this?
Brayden Levangie: Carefully. It’s too powerful to just throw into the wild. Right now we work with select high-impact deep-science firms, advanced-tech startups, and IPO-bound companies that can use it responsibly and at scale.
Our vision is not to be another B2B SaaS agent platform. We’re building a system that automates the process of scientific, technological, and creative discovery. Humanity needs acceleration in all of those areas to solve its biggest problems. These agents can help us do that.
Yes, it’s a for-profit company, but profit fuels progress. We’re aligning with partners who share a public-good mindset. In the long run, this becomes an infrastructure for collective progress, not just another enterprise app.
Replace sunk-cost failed AI with full autonomy
David Cushman: What kind of companies make the cut?
Brayden Levangie: We’re not short of interest, so we’re picky. The number-one criterion is alignment. I have to feel I want to work with the founders. Culture matters even in automation.
Mostly we’re partnering with technology-centric enterprises spending millions on AI projects that our agents can replace or outperform quickly. They come to us saying, “We’ve sunk huge budgets into AI that still needs humans in the loop.” We show them what full autonomy looks like.
David Cushman: Enterprises still have to buy foundation models from the big players, right?
Brayden Levangie: Sure. But we’re not competing with the LLM providers; we’re complementary. They supply the raw linguistic intelligence; we supply cognition, memory, and autonomy. Think of us as infrastructure-layer innovation, not application-layer AI. We’re re-engineering behavior at the token-generation level, turning probabilistic text prediction into purposeful reasoning. That’s what turns language models into agents that act.
The next technological epoch offers systems that grow
Brayden Levangie: Every day the architecture improves itself. It learns new domains, designs new agents, and contributes back to our internal ecosystem. We’re watching intelligence compound in real time.
For the first time, we have a system that can grow: not just run the code we wrote yesterday, but write better code tomorrow. This is the next stage of technological evolution.
Humanity has always accelerated progress by creating tools that amplify labor. Now we’re creating entities that amplify thought.
Humans remain firmly at the helm in the autonomous age
David Cushman: And what about people’s jobs? This sounds like a lot of humans out of a lot of loops?
Brayden Levangie: The creative-director model keeps humans essential. The systems execute, but humans define value, ethics, and purpose.
In my view, we’re moving from labor-based organizations to imagination-based ones. The winners will be those that learn to orchestrate fleets of autonomous agents toward bold human goals.
David Cushman: Brayden, you’re 20. You’ve been building this since 13. Do you ever step back and think: this is moving fast?
Brayden Levangie: Every day. I built most of this in a spare room in the woods of Massachusetts. Now I’m in San Francisco, ready to shape a generational shift, building the future on my own terms, and hopefully for the better of everyone.
Bottom line: The services-as-software inflection point is here. Autonomy means services can be fully delivered by software.
Where many of today’s AI “agents” are scripted copilots, a new era of self-evolving digital colleagues takes us on a leap from automation to autonomy – delivering the inflection point at which services can be fully delivered by software. Prepare to redesign your organisations with humans as creative directors guiding fleets of intelligent agents toward business transformation – with the powerful benefit of the daily, autonomous, discovery of better.
Brayden Levangie’s cognitive architecture is at the leading edge of the shift to full autonomy. Cognitive architectures, episodic memory, persistent state, and autonomous loops are now in the mainstream of cognitive-architecture development and agentic-LLM thinking. The result is a leap forward in working, multi-domain, persistent agent systems that enterprises can use in anger.
Demos that Brayden has shown HFS suggest a level of integration and autonomy that can compete with the most advanced commercial agents – such as Devin in the world of coding agents. Levangie Labs’ applications in construction, spatial reasoning, legal IP, and investment also indicate the framework is broadly applicable across verticals and enterprise use cases.
Congratulations. You lifted. You shifted. You outsourced your “as-is” operation faster than anyone could say “transformation.”
And now your service provider is proudly running the same broken processes, just in a cheaper time zone. You’ve digitized your inefficiency, turned your bureaucracy into a managed service, and signed a multi-year contract that’s harder to exit than a bad marriage. The only thing that really changed is the currency in which your problems are billed.
Welcome to the world of the lift and shift that still stinks.
It still smells like before: Most enterprises outsource broken processes and call it transformation
Most enterprises fall into the same trap. They outsource too early, too broadly, and too hopefully. The logic sounds fine: “Let’s get some quick labor savings and then transform later.”
The problem is that later never comes, while the pace of change has never been faster.
Now you’re three years into your five-year deal and you’ve got a low-cost service provider managing your old ways of working, complete with manual approvals, 23-step handoffs, meager 3% annual efficiency improvements, and weekly Excel wars. If your KPI dashboard looks cleaner, that’s only because you’ve spent more on Power BI.
But here’s the good news: AI might finally be the disinfectant we’ve been waiting for.
AI can redesign your processes in hours, not governance cycles
AI isn’t going to fix lazy governance or bad contracts, but it can finally give you the x-ray vision to see what’s broken and the tools to redesign it at speed.
Think about it:
GenAI can read and interpret 300-page outsourcing contracts in seconds, flagging risk and ambiguity that used to take a legal team a week to find.
Agentic systems can orchestrate workflows across service providers, eliminate redundant handoffs, and trigger actions automatically instead of sending another email to “check status.”
Predictive analytics can finally expose the real process bottlenecks, the hidden exceptions, and the false efficiency metrics that your lift-and-shift hid behind.
AI can scan the services and technology marketplace to identify the latest innovations and propose where you can integrate them for the best results.
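The first bullet above, scanning contract text for risk and ambiguity, can be illustrated with a deliberately simple sketch. A real GenAI review would use an LLM over the full document; this keyword pass only shows the shape of the output. The pattern labels and regexes are invented examples of clauses legal teams commonly flag.

```python
import re

# Illustrative risk patterns -- a real review would be LLM-driven and
# far broader than three regexes.
RISK_PATTERNS = {
    "auto-renewal": r"automatic(ally)? renew",
    "unilateral change": r"sole discretion",
    "vague effort": r"(best|reasonable) efforts",
}

def flag_clauses(contract_text):
    """Return (label, matched text) pairs for every risky phrase found."""
    findings = []
    for label, pattern in RISK_PATTERNS.items():
        for m in re.finditer(pattern, contract_text, re.IGNORECASE):
            findings.append((label, m.group(0)))
    return findings

clause = ("This agreement shall automatically renew unless terminated. "
          "Provider may amend pricing at its sole discretion.")
print(flag_clauses(clause))
# [('auto-renewal', 'automatically renew'), ('unilateral change', 'sole discretion')]
```

Even this toy version makes the economics visible: the scan is instant and exhaustive, where a human first pass over a 300-page contract is neither.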
You don’t need to “optimize” a bad process anymore… You can teach AI to redesign it.
AI exposes the failures your service provider and operations teams have been hiding
The days of hiding behind dashboards and governance calls are over. Here’s how leaders are using AI to turn their inherited outsourcing mess into intelligent operations:
Recode broken workflows. Use AI to map every step, identify dead loops, and rebuild from the outcome backward.
Automate exception handling. Train AI agents to resolve most of the “manual review” tasks that clog your SLAs.
Apply sentiment and pattern analytics. Mine your meeting transcripts and governance documents to identify cultural or behavioral blockers that data alone can’t show.
Digitally audit service provider performance. AI can continuously scan SLA and contract compliance instead of waiting for quarterly review meetings that achieve nothing.
Drive innovation into your relationship. Compare your processes and technology to the latest market capabilities and ask AI to build a pipeline of major initiatives you can undertake.
Build your own GovernanceGPT. Use AI to review team outputs in advance of meetings, challenge the lack of improvement and inertia, and propose solutions.
The old service provider account manager model of stabilize, report, repeat is over. The future belongs to AI-enabled orchestration that connects humans, bots, and platforms into a single operational rhythm.
Five rules to de-stink your operating model
The excuses are gone. AI now gives you a live heartbeat of performance, while most of your competitors are still running operations by looking in the rearview mirror. If you cannot measure in real time, cannot update contracts to match actual outcomes, and cannot remove handoffs that slow everything down, your operating model is legacy, and AI will expose it for what it is.
Here are five rules separating leaders from casualties in the post-stink era:
1. If you can’t measure it in real time, it’s not transformation
Transformation is a real-time sport. If you wait for quarterly KPIs, you are already behind. AI can give you the live signals that show what is working and what is not, but you need to wire your processes so the data can actually flow.
Example… A global consumer goods firm uses AI-driven dashboards tracking supply chain velocity across 40 markets in real time. Instead of waiting for monthly reports, leaders see SKU-level delivery delays within hours and redirect logistics accordingly. That’s transformation that breathes, not transformation by PowerPoint.
Action… Equip your sales, support, and supply chain teams with real-time visibility. If your data doesn’t show live performance, it’s just noise pretending to be insight.
2. Stop managing service providers and start managing outcomes
Governance needs a redesign for the AI-First era. Enterprises waste endless time policing SLAs instead of measuring business impact. AI doesn’t care who does the work, only that it gets done better and faster. The future is co-managed outcomes, not supplier babysitting.
Example… A North American bank replaced legacy BPO scorecards with AI-driven outcome contracts. Instead of tracking FTEs and ticket closure times, it measures resolution quality and customer retention using real-time sentiment analytics. The provider’s bonus or penalty adjusts automatically each month based on outcomes, not headcount.
Action… Shift governance from “who does what” to “what got done.” Let AI be the referee and build trust through transparency, not meetings.
3. Kill the handoffs before they kill your efficiency
Intelligent automation isn’t about faster handoffs. It’s about removing them altogether. Every process handoff creates latency, risk, and miscommunication. AI agents and automation platforms now allow work to flow seamlessly end-to-end. In fact, you can legitimately claim agentic AI is the distant offspring of RPA, where the technology can be designed by business experts and actually scales with the business.
Example… A global insurer rebuilt its claims process so GenAI reads documents, extracts details, verifies policy data, and triggers payments without human relay points. Claim cycle times fell 70%, and accuracy improved because there were fewer handoffs, not faster ones.
Action… Map your top 10 workflows and identify every touchpoint adding no value. Then challenge your AI team to eliminate at least half within 90 days.
4. Use AI to rewrite the contract you wish you’d signed
Traditional contracts freeze assumptions in time. AI lets you model new pricing and gain-sharing scenarios using real performance data. Contracts can now be living documents that update as outcomes evolve.
Example… A European telecom provider built a dynamic pricing model for its transformation partner. AI recalculates cost and savings every quarter based on network uptime, customer churn, and automation efficiency. Both sides log into the same dashboard and co-manage profit impact. No more quarterly disputes, only shared accountability.
Action… Use generative contract platforms to simulate what-if scenarios before renewal. Build data feeds into contract terms so pricing and rewards evolve with performance, not politics.
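A living, outcome-linked fee like the telecom example above can be modeled in a few lines. The weights, baselines, and share rate below are invented purely for illustration; any real gain-sharing contract would define its own metrics and formula.

```python
def quarterly_fee(base_fee, uptime_pct, churn_pct,
                  baseline_uptime=99.0, baseline_churn=2.0, share_rate=0.5):
    """Adjust the base fee up or down from measured outcome deltas.

    Each point of uptime above baseline, or churn below baseline, moves
    the fee by share_rate percent of base (hypothetical contract terms).
    """
    uptime_delta = uptime_pct - baseline_uptime   # better uptime -> bonus
    churn_delta = baseline_churn - churn_pct      # lower churn  -> bonus
    adjustment = share_rate * (uptime_delta + churn_delta) / 100
    return round(base_fee * (1 + adjustment), 2)

# What-if scenarios to run before renewal:
print(quarterly_fee(1_000_000, uptime_pct=99.8, churn_pct=1.5))  # bonus quarter
print(quarterly_fee(1_000_000, uptime_pct=97.5, churn_pct=3.0))  # penalty quarter
```

Because both sides can recompute the same formula from the same dashboard data, the quarterly dispute turns into a shared calculation, which is the whole point of rule 4.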
5. Sacred cows still make great burgers
Every company has legacy processes that everyone promised to “fix later.” Later is now. AI is the grill that can finally cook what’s been sitting in the fridge for a decade.
Example… A large retailer used GenAI to automate its 20-year-old product taxonomy cleanup project. What was once a “too complex” manual task got done in six weeks, unlocking more accurate demand forecasting and merchandising.
Action… List every “too hard” process in your organization. Then assign each one an AI experiment owner. The sacred cows protected by politics and inertia are now your juiciest efficiency wins.
Bottom line: The post-stink era is here… if your operating model still smells like legacy, AI won’t perfume it, it’ll expose it.
Transformation now means measuring, automating, and re-contracting in real time. Firms that embrace these five rules will lead the next decade of enterprise reinvention. Those that don’t will spend it explaining to boards why their competitors moved faster while they were still waiting for quarterly reports to tell them what already happened.