Stop treating FDE as optional: Your AI Flywheel will not spin without it


Enterprise technology leaders are drowning in AI commentary. LLMs. Agents. Vibe coding. The analyst decks keep coming. But the hard question nobody is answering is this: who actually wires AI into your live systems, governs it in production, and keeps it working when the AI software vendors leave the room? The answer is Forward Deployed Engineering (FDE). If your transformation strategy does not have it, you are building AI theater, not an AI operating model.

93% of enterprises are stuck in AI pilot purgatory. The missing layer is not better models or bigger budgets. It is Forward Deployed Engineering, and the firms that crack it at scale will own the recurring revenue layer of enterprise AI.

The Services-as-Software Flywheel brings together the AI technologies to steer firms into the AI era

The HFS Services-as-Software Flywheel has four accelerants: LLMs that accelerate reasoning and code generation, agentic AI that orchestrates decisions across systems, vibe coding that turns business intent into working service agents, and Forward Deployed Engineers (FDEs) who activate AI in real enterprise environments. The result is a compounding system where intent becomes production workflows, workflows generate data, and that data improves the next generation of agents.


The missing insight in many AI strategies is that velocity alone does not create enterprise value. The Services-as-Software flywheel requires an embedded execution layer that connects these technologies inside real operational systems. FDE forms that layer, ensuring the flywheel spins inside production environments rather than inside sandbox pilots. Here is what actually happens without FDE:

  • LLMs summarize PDFs in sandboxed demos, disconnected from governed enterprise data.
  • Agents sit in pilot mode indefinitely because nobody has designed the approval chains, audit trails, and escalation paths that regulated operations require.
  • Vibe coding generates experimental agents at the business unit level with no architectural coherence, creating fragmentation and compliance exposure.

The Flywheel does not spin because there is no embedded engineering force to connect the components inside real systems. That is the dirty secret of AI services. The gap is not technological. It is operational.

Services-as-Software does not eliminate services. It embeds them deeper into the software. FDE is the mechanism that makes that shift real.

Palantir cracked this a decade ago. The ecosystem forming around it is a preview of the emerging Services-as-Software market.

Palantir built its competitive advantage not on model superiority but on proximity to operational reality. Forward deployed engineers embedded inside client environments, wiring models into live data, real permissions, regulatory controls, and the messy ontologies that reflect how enterprises actually function. They did not sell transformation roadmaps. They shipped production workflows.

The market is increasingly recognizing this model. Palantir’s share price has increased roughly 10× in the past two years, reflecting investor belief that the future of enterprise AI lies not just in models, but in the ability to embed those models into operational systems.

That approach is now being industrialized through AIP Bootcamps: structured engagements that take a team from a scoped problem to a working production deployment in 1 to 5 days. Not a proof of concept in a sandbox. A live workflow with real data and real controls. That changes the entire commercial dynamic.

FDE is not implementation – it is the engineering layer that makes AI governable.

There is a persistent misunderstanding in the market. FDE is often conflated with systems integration or technical implementation. It is neither. FDE is the discipline that turns AI capabilities into durable enterprise mechanisms. The Palantir model makes this concrete: FDE teams build ontologies that reflect how the enterprise actually operates, wire models into real data with real permissions, and design the governance architecture that keeps autonomous systems accountable.

What LLMs cannot do on their own:

  • Connect themselves to governed enterprise data with appropriate permission structures.
  • Navigate the regulatory architecture of specific industries, from HIPAA to Basel III to GDPR.
  • Design and enforce human approval chains for decisions that carry legal or financial consequences.
  • Monitor for model drift, output degradation, or ontological inconsistency over time.
  • Maintain alignment between the AI layer and the evolving business logic it is meant to serve.

FDE teams own all of that. The cost of not having them is not a missed optimization. It is a compliance event, a reputational failure, or an AI system that quietly degrades until someone notices the outputs stopped making sense.
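The monitoring item in that list is the one most quietly neglected. As a purely illustrative sketch (the metric, threshold, and data are assumptions, not any vendor's actual method), a minimal drift check might compare recent model performance against a calibration baseline and flag when outputs have quietly degraded:

```python
import statistics

# Hypothetical sketch of an FDE-style drift monitor. It flags when the
# recent mean of a model metric drifts outside a tolerance band around
# a calibration baseline. The metric, threshold, and numbers below are
# illustrative assumptions only.

def drifted(baseline: list[float], recent: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean sits more than z_threshold
    standard errors away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / (len(recent) ** 0.5)
    return abs(statistics.mean(recent) - mu) > z_threshold * se

baseline = [0.80, 0.82, 0.79, 0.81, 0.80, 0.83, 0.78, 0.81]  # e.g. weekly accuracy
assert not drifted(baseline, [0.81, 0.80, 0.79, 0.82])       # within the band
assert drifted(baseline, [0.65, 0.62, 0.66, 0.64])           # quietly degraded
```

The point is not the statistics; it is that someone owns the check, runs it continuously, and is accountable when it fires.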

LLMs accelerate. FDE operationalizes. Without the second, the first is a liability, not an asset.

Agentic AI without FDE governance is not transformation. It is risk accumulation.

Agentic AI is the most significant shift in enterprise technology in a generation. Agents can trigger workflows, coordinate decisions across systems, execute multi-step logic, and enforce compliance rules in real time. But autonomous workflow proliferation without governance architecture is dangerous in regulated industries.

A financial services firm cannot allow agents to make credit decisions without explicit decision rights, immutable audit trails, escalation paths, and human override mechanisms. A healthcare system cannot let clinical workflow agents operate without continuous performance monitoring and documented accountability chains. This is not a chatbot problem. It is a systems engineering problem, and FDE is the only delivery model currently designed to solve it at enterprise scale. The governance architecture FDE teams build includes:

  • Ontology design that reflects how the enterprise actually operates, not how a vendor template assumes it does.
  • Decision rights mapping documenting who and what can authorize each class of agent action.
  • Continuous performance monitoring that catches drift before it becomes a compliance failure.
  • Human-in-the-loop override architectures designed for operational teams, not technical administrators.
  • Escalation path engineering that routes exceptions to the right humans at the right level of urgency.
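To make "decision rights mapping" and "escalation path engineering" concrete, here is a minimal sketch. The action classes, thresholds, and routing rules are illustrative assumptions, not Palantir's or any FDE team's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: a decision-rights map that gates agent actions.
# Action names, limits, and confidence thresholds are invented for
# illustration.

@dataclass
class AgentAction:
    kind: str          # e.g. "credit_decision"
    amount: float      # financial exposure of the action
    confidence: float  # model confidence score, 0..1

# Decision-rights mapping: which action classes an agent may execute
# autonomously, and where the human escalation boundary sits.
DECISION_RIGHTS = {
    "credit_decision": {"auto_limit": 10_000, "min_confidence": 0.90},
    "invoice_matching": {"auto_limit": 50_000, "min_confidence": 0.80},
}

def route(action: AgentAction) -> str:
    """Return 'auto', 'escalate', or 'reject' with an audit-friendly rule."""
    rights = DECISION_RIGHTS.get(action.kind)
    if rights is None:
        return "reject"  # no decision rights defined: never act silently
    if action.amount <= rights["auto_limit"] and action.confidence >= rights["min_confidence"]:
        return "auto"
    return "escalate"    # route to a human approver with full context

assert route(AgentAction("credit_decision", 5_000, 0.95)) == "auto"
assert route(AgentAction("credit_decision", 50_000, 0.95)) == "escalate"
assert route(AgentAction("trade_execution", 100, 0.99)) == "reject"
```

The design choice that matters is the default: an action class with no mapped decision rights is rejected, not silently executed.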

Vibe Coding creates velocity. FDE prevents it from becoming chaos.

Vibe coding lowers the barrier to building service agents to near zero. Business analysts can express intent and receive working agent code in return. That is a structural change in enterprise operating capacity. It is also a fragmentation risk without an engineering discipline layer.

When every business unit spins up agents independently, you get redundant logic across siloed codebases, compliance exposure from agents built outside the governance perimeter, and an AI estate that is technically diverse but operationally unmanageable. The firms in the Palantir ecosystem, building reusable ontology libraries and control frameworks for specific verticals, are creating precisely the discipline layer that makes vibe coding sustainable. That is not a feature. It is a defensible competitive position with real switching costs attached. The discipline layer includes:

  • Standard patterns that teams build within, not around.
  • Reusable ontologies that maintain consistency across business unit deployments.
  • Version control and change management frameworks designed for agent-based systems.
  • Guardrails that catch compliance and security issues before deployment, not after.
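A minimal sketch of that last item, pre-deployment guardrails, might validate a vibe-coded agent spec against the governance perimeter before it ships. The spec fields, scope names, and rules below are illustrative assumptions, not a real platform API:

```python
# Hypothetical sketch: a pre-deployment guardrail check for a
# vibe-coded agent spec. Field names, data scopes, and rules are
# invented for illustration.

ALLOWED_DATA_SCOPES = {"crm.accounts", "erp.invoices"}
ACTIONS_REQUIRING_APPROVAL = {"send_payment", "update_credit_limit"}

def guardrail_check(agent_spec: dict) -> list[str]:
    """Return a list of violations; an empty list means deployable."""
    violations = []
    for scope in agent_spec.get("data_scopes", []):
        if scope not in ALLOWED_DATA_SCOPES:
            violations.append(f"unapproved data scope: {scope}")
    for action in agent_spec.get("actions", []):
        if action in ACTIONS_REQUIRING_APPROVAL and not agent_spec.get("human_approval"):
            violations.append(f"action '{action}' requires a human approval chain")
    if not agent_spec.get("owner"):
        violations.append("no accountable owner recorded")
    return violations

spec = {"data_scopes": ["crm.accounts", "hr.salaries"],
        "actions": ["send_payment"], "human_approval": False}
print(guardrail_check(spec))
# flags the unapproved scope, the ungated payment action, and the missing owner
```

Run at deployment time rather than audit time, a check like this turns governance from a quarterly review into a gate every agent passes through.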

The Palantir AIP (Artificial Intelligence Platform) Bootcamp is the most important commercial innovation in enterprise AI services right now.

In a Services-as-Software market, the client is not buying a transformation roadmap. They are buying working outcomes: claims triage that runs autonomously, supply chains that self-correct in real time, and compliance systems that audit continuously.

The AIP Bootcamp proves this model is real: a structured engagement, one to five days, that lands a specific workflow in production with real data and real controls. Instead of selling a roadmap, you sell a working workflow, and the client sees production capability before committing to scale. That changes the entire conversation about what AI services should cost and how they should be structured.

The downstream commercial implications are structural:

  • Sales cycles compress because proof-in-production replaces proof-of-concept theater.
  • Pricing shifts from time-and-materials to outcome-based or platform-plus-run structures.
  • Margin structures change because expertise density replaces labor volume as the core economic driver.
  • Recurring revenue replaces project revenue because deployed workflows require continuous operation, monitoring, and evolution.

FDE-service providers are no longer selling hours. They are selling production systems that keep delivering outcomes. That distinction separates the AI platform builders from the AI plumbers.

The partner lineup is significant not just for who is in it, but for how it is splitting: strategy-to-execution consultancies on one side, industrial-scale integrators and operators on the other. That split is not accidental. It is the three-layer market structure forming in real time.


The three-layer market is forming now, and market position is not guaranteed.

The Palantir partner ecosystem is the clearest early map of the market structure that will define enterprise AI services through the next five years. Three durable layers are forming, and the window to establish defensible position is narrowing.

Layer A: Strategy and operating model redesign.

Bain, Deloitte, PwC, and KPMG will own the AI operating system transformation layer. They define how enterprises restructure around AI-enabled workflows, with Palantir and other platforms as execution substrates. Competitive differentiation is proximity to senior leadership and the organizational change capability built over decades.

Layer B: Build and integrate.

Accenture, Capgemini, Infosys, and Cognizant will compete on certified delivery capacity, vertical industry accelerators, and speed-to-production. The winners will build the largest libraries of reusable ontologies, workflow templates, and controls frameworks for specific verticals. Switching costs accumulate here, and margin density improves over time. Accenture’s preferred global partner positioning signals a land-and-scale economics model already pulling away from the field.

Layer C: Run and govern.

This is where Services-as-Software becomes genuinely recurring. Rackspace has made the most explicit move here, positioning governed managed operations as a production service with operational SLAs. As more workflows go live, demand for disciplined AI estate management becomes a standalone commercial category with high switching costs and defensible margin.

One critical dynamic cutting across all three layers: government and regulated industries will disproportionately drive spend. Palantir’s center of gravity remains in defence, intelligence, and regulated enterprise, and it is expanding. Partners with existing clearances, regulatory delivery experience, and government relationships have a structural advantage that pure commercial integrators will struggle to replicate quickly.

The ontology arms race has already started, and the winners will be obvious within 18 months.

Foundry’s ontology concept, modelling the enterprise as an interconnected operational system, is the stickiest element in the platform. Partners building deep, reusable ontologies for specific verticals are not just accelerating delivery. They are creating lock-in that travels with the client relationship and compounds with every additional use case deployed.

  • Deloitte is combining its own assets with Foundry and AIP to create solution factory economics with accelerated time-to-value.
  • Accenture is building certified talent at scale to establish the largest industrialized delivery capacity in the market.
  • Cognizant is targeting healthcare operations specifically through the TriZetto combination, creating vertical depth rather than horizontal breadth.
  • Rackspace is building the managed operations layer that everyone else will eventually need to hand off to a specialist.

The firms still assembling their Palantir partnership and staffing for generic Foundry delivery are already behind. Ontology depth, workflow libraries, and delivery track record cannot be purchased quickly. The advantage is compounding in favor of early movers.

As AI-assisted building accelerates, services differentiation moves further up-stack into domain architecture, accountability frameworks, and measurable outcome guarantees. Providers competing on implementation capacity will find the floor dropping under them.

The brutal arithmetic: expertise density wins, labor leverage loses.

Enterprise technology leaders evaluating their services relationships need to ask a direct question: is this firm’s growth model built on expertise density or labor leverage? The answer determines everything about value delivery in an AI-driven market.

Traditional IT services scaled revenue by scaling headcount. LLM acceleration and agentic automation are compressing the labor input required per outcome delivered. A provider whose economics depend on headcount growth faces a structural margin problem regardless of what their AI partnership announcements say.

FDE-style delivery inverts the model: smaller squads, higher context density, faster deployment, higher-value outcomes, and recurring run revenue from systems they operate. The Palantir partner firms moving fastest on this are growing their expertise density and workflow libraries, not their headcount. That is the Services-as-Software endgame.

You are not choosing between AI vendors. You are choosing between providers who can deploy AI into production and those who will keep you in the pilot phase indefinitely.

The Bottom Line: Stop treating FDE as optional. It is critical to activating your AI systems and capabilities

Every quarter your enterprise spends in pilot mode is a quarter your competitors are driving production AI advantages. Demand FDE-capable delivery from your services partners, and measure them on production deployments, not roadmap slides.

If a partner cannot show a working workflow in your live systems within 90 days, they are not your AI transformation partner. They are your most expensive source of false confidence. The Palantir partner ecosystem has already shown what production-first delivery looks like. There is no excuse left for settling for anything less.

Posted in : Agentic AI, Artificial Intelligence, Business Process Outsourcing (BPO), Forward Deployed Engineering, GCCs, GenAI, Generative Enterprise, IT Outsourcing / IT Services, LLMs, OneOffice, Vibe Coding


The HFS AI Trust Curve: AI isn’t failing… leadership is


Every enterprise today is using some form of AI, but only one in five has embraced agentic AI to actually make decisions. This is not a technology problem, but a trust problem.

Recent research covering 545 enterprise decision makers across the Global 2000 reveals that 78% grant very little or no autonomy to agentic AI.


The HFS AI Trust Curve (below) maps the four stages every enterprise CIO or Chief AI officer must traverse to get from “the model works” to “we act on what it tells us.” Understanding where you are on this curve and what is keeping you stuck is the most important AI question your leadership team is not asking.

The HFS AI Trust Curve: Four Stages, Most Enterprises Never Leave Stage 2

The HFS AI Trust Curve is not a maturity model in the traditional sense. It does not reward effort or intent. It rewards one outcome: an organization in which AI is allowed to influence decisions. Each stage has a defining question, a failure pattern, and a KPI that reveals where trust actually stands:

Source: HFS Research (qualitative) analysis – Data modernization and AI Horizons Study

To put things into perspective, consider a mid-sized consumer goods company delivering a $3B personal care brand with operations across 15 markets. This company’s story, laid out along this trust curve, is almost universal.

Stage 1. Model Confidence: Can the AI model work?

A $3B personal care brand operating across 15 markets builds an AI-powered demand forecasting model. It hits 87% accuracy in back-testing, outperforming the legacy statistical model by 14 percentage points. The Chief Digital Officer declares victory and the AI program is officially launched.

This is Stage 1. The KPI is model accuracy, which is necessary but not sufficient. What looks like an AI strategy is still an engineering achievement. Business stakeholders are impressed, but not yet converted, and that gap is what drives everything that follows.

Stage 2. Data Credibility: Do we believe the inputs?

Three months in, the VP of Supply Chain notices the AI’s demand signal for a core SKU diverges sharply from the regional sales team’s planning deck. The data science team traces it to a mismatch in how “sell-in” versus “sell-out” is defined across systems. The regional sales director has been using a different data set for two years and considers his version the gold standard. Now there are two dashboards, two answers, and a model that is technically correct but organizationally contested. AI has inherited a problem humans created.

The Stage 2 KPI now becomes the reconciliation effort: the time spent resolving competing definitions and ownership disputes. For this consumer goods company, the data fight is a symptom of a governance failure that requires a conversation between the CFO, Chief Supply Chain Officer, and CDO. It has nothing to do with an ETL pipeline (structured data workflow). Enterprises that treat Stage 2 as an engineering problem are guaranteeing a ceiling on everything AI could achieve.

Stage 3. Behavioral Trust: Will people actually act on it?

The personal care brand resolves most of the data disputes, or at least calls a truce. The model is redeployed. Regional planners are trained. And then, in the next planning cycle, something quietly damning happens. The planners pull the AI recommendation, note it, and then proceed to build their own bottom-up forecast in Excel, adjusting for “local market intuition” and “factors the model doesn’t understand.” The AI output is printed in the deck as Appendix B, but nobody references it in the meeting.

This is Stage 3. The danger zone. When AI becomes advisory only, trust has not crossed the curve. It has essentially stalled at the edges.

The override rate, i.e., the percentage of AI recommendations that are modified or ignored in final decisions, shoots up to 75%. Senior leadership interprets this as a change management problem, which it is most definitely not. It is a symptom of unresolved credibility gaps from Stage 2 and of a deeper structural reality: the planners are not rewarded for trusting the model. They are rewarded for hitting their numbers. If the model is wrong and they follow it, the accountability falls on them. That incentive structure essentially turns rational humans into override engines.

Stage 4. Decision Reliance: Is AI allowed to influence outcomes?

Stage 4 looks different. In this scenario, the consumer goods brand’s new Chief Supply Chain Officer makes a conscious structural change. AI-generated demand signals become the baseline for all planning conversations. Planners must log overrides with documented rationale. Performance reviews start to include a metric on how well AI recommendations correlate with actual outcomes, and whether human adjustments added value or subtracted it. Within two quarters, override rates drop to 30%.

The KPI here is time-to-trust, i.e., how quickly does an AI-generated insight translate into an actual decision? In Stage 4 enterprises, this number is tracked. In Stage 3, it is not even a concept yet.

The mark of Stage 4 maturity is not that AI is always right. It is that the organization has accepted that AI creates value only when it is allowed to be wrong before it is right. This stage requires institutional courage that most enterprises have yet to find. The reality is that enterprise accountability structures still punish the person who trusted a model that missed, while quietly ignoring the person who ignored a model that was right.

The four KPIs across the four stages are your trust matrix

The four trust-curve KPIs, i.e., model accuracy, reconciliation effort, override rate, and time-to-trust, do not tell you how good your AI is. They tell you where trust is actually breaking down. Presented together, they form an honest picture of whether your enterprise is genuinely adopting AI to realize its full potential.

Most AI program dashboards obsessively report the first KPI and ignore the other three, creating a blind spot. Reconciliation effort and override rate are KPIs enterprises actively avoid measuring, because what they reveal is an uncomfortable truth about human shortcomings, including contested data ownership, unresolved governance failures, and business users who have quietly concluded the AI is not worth the risk of being wrong alongside it. In the consumer goods example, a single override rate measurement revealed a governance failure that two years of AI investment had papered over.

The plateau persists because of culture debt

Enterprises stall between Stages 2 and 3 not because the models are weak, but because the organization was designed for human-controlled decisioning. The capabilities that get you through Stage 1, experimentation and validation, are not the capabilities that move you into scaled, AI-driven execution. Technical teams can tune models. They cannot renegotiate data ownership with Finance. They cannot redesign incentives so planners trust machine-generated forecasts. They cannot build the institutional confidence required for leaders to stand behind an AI-informed decision that later proves imperfect.

The firms breaking through the curve are not doing so because they have superior algorithms. They are doing so because leadership has resolved the human questions: Who owns the data? Who owns the insight? Who owns the outcome? Until those answers are explicit, AI remains advisory theater.

The Bottom Line: Every day your AI sits in recommendation mode is a day your competitor is operationalizing theirs. That gap is culture debt, and it compounds faster than technical debt because it hides behind governance language and “risk management.”

Instrument your AI deployments. Measure override rates. Track how often outputs are second-guessed or manually reconciled. Surface where decision rights are being pulled back to humans by default. Then follow those signals upstream to the incentive misalignments and trust deficits they reveal.
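Two of those signals are directly computable from a decision log. A minimal sketch, assuming a hypothetical log schema of (recommendation issued, decision made, AI followed):

```python
from datetime import datetime, timedelta

# Hypothetical sketch: computing two trust-curve KPIs (override rate,
# time-to-trust) from a decision log. The log schema and entries are
# illustrative assumptions.

decisions = [
    # (recommendation issued, final decision made, was the AI followed?)
    (datetime(2025, 1, 6, 9), datetime(2025, 1, 6, 15), False),
    (datetime(2025, 1, 7, 9), datetime(2025, 1, 9, 11), False),
    (datetime(2025, 1, 8, 9), datetime(2025, 1, 8, 10), True),
    (datetime(2025, 1, 9, 9), datetime(2025, 1, 9, 12), True),
]

# Override rate: share of AI recommendations modified or ignored.
override_rate = sum(1 for _, _, followed in decisions if not followed) / len(decisions)

# Time-to-trust: average lag from AI signal to acted-on decision.
time_to_trust = sum((acted - issued for issued, acted, _ in decisions),
                    timedelta()) / len(decisions)

print(f"override rate: {override_rate:.0%}")   # -> override rate: 50%
print(f"avg time-to-trust: {time_to_trust}")   # -> avg time-to-trust: 15:00:00
```

The instrumentation is trivial; the organizational willingness to log overrides honestly is the hard part the article is pointing at.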

Stage 4 is not unlocked by better prompts or bigger models. It is unlocked by organizational honesty. This is not a technology bottleneck, it is a leadership one.

Posted in : Agentic AI, Artificial Intelligence, Business Data Services, Data Science


Welcome to the last 18 months of labour-intensive services

2025 saw savvy enterprises despair of the insipid deluge of flashy boardroom presentations and finally move beyond AI fantasy to the reality of execution.

It’s a pivot that has created an inflection point for the services industry. Legacy delivery models focused on bums-on-seats aren’t relevant anymore, and services firms must reinvent themselves to survive. Those who don’t will quickly find themselves obsolete, as 75% of the Global 2000 recently declared in our Pulse Study.

 

Here, we reflect on what we believe will shape the next 18 months with a brutal review of the current state of play in IT and BPO services…

Why will 2025 serve as the inflection point of global services?

The AI honeymoon period ended. The conversation finally moved on from endless possibilities to what actually works at scale. Savvy enterprises are looking beyond copilots to early agentic systems embedded in real workflows, hoping to ditch traditional labour-led delivery models in the process. They are also demanding more from their service providers; they want better outcomes, faster, with greater accountability. It’s exposed leadership debt, process debt, and data debt that services firms can no longer hide behind through headcount growth.

Structural stress drove real action. Margin pressure, slowing discretionary spend, and geopolitical uncertainty killed complacency and forced most firms to rethink their operating models. Everything, from pricing and talent models to capital allocation, was reimagined. Inorganic growth became more strategic, as they looked to bolt on software, data, and AI capabilities. Mid-tier providers became increasingly relevant as their nimble model helped navigate structural stress.

Product velocity became the real GCC litmus test. Cost advantage is table stakes. Scale is less relevant. The strong GCCs are embedding expertise and AI capabilities, integrating themselves tightly with global business teams, and defining measurable accountability. They discuss outcomes, not activities. Product velocity is the metric that matters; how quickly can your GCC transform an idea into real capability? That separates GCCs that can anchor AI-led growth from those that are just another rebadged delivery center, posing future delivery risk.

BPO collided with IT Services. The wall between “managing technology” and “managing processes” shatters when AI automates entire workflows across both domains. Capgemini’s acquisition of WNS is living proof of it. BPO providers’ labour-intensive delivery models (such as contact centers, finance and accounting (F&A) processing, and HR administration) are prime targets for agent-based automation. BPO players that don’t pivot, swapping FTE models for outcome-centric ones, will see their value proposition erode. Meanwhile, winners will own what fuels agents: domain expertise, process intelligence, and enterprise data.

What will be the big technology impact shaping global services in 2026?

Agentic AI will face increased scrutiny from enterprises. The focus will shift from building agents to governing them, which will be a pain point for enterprises. Multi-agent systems introduce accountability, complexity, and trust issues that traditional operating models weren’t designed to handle. As a result, demand will surge for orchestration, observability, and an Agent Operating System. Enterprises don’t need more agents; they need agents they can rely on.

Data becomes a boardroom issue. Enterprises finally understand that AI success isn’t about which model they use; it’s about the data sitting within their own organization. It’s about data quality, lineage, security, and regulatory readiness. Services firms that blend engineering depth with data governance and risk management will win in 2026.

Simplicity is the new success multiplier. The technology is ready, but many enterprises are not. They remain burdened with decades of enterprise debt, tangled systems, fragmented platforms, and overly customized cores. AI will never deliver tangible outcomes in that environment, just enhanced complexity. Enterprises that purposely simplify, standardize, and re-platform should expect to extract far more value from the same AI investment.

Revenue and headcount separation accelerates. Enterprises no longer want effort-based contracts. They will continue their push for outcome-based pricing, productivity assurances, and software-infused services. This favours services firms capable of productizing their IP, investing in the right platforms, and demonstrating the outcomes they deliver, rather than those that mistake scale for value.

What are the critical themes emerging in 2026?

Talent will be redefined. Technical hands-on capability will not be optional for leaders. They must be comfortable building agents and leading from the front, rather than delegating from the safety of their boardroom. Service firms will broaden their recruitment strategies, looking to product companies for go-to-market expertise, the entertainment industry for storytelling, and non-traditional sectors for commercializing outcomes. The time for hiring the same old people is long gone.

Investor success metrics are changing. Old scale metrics have been replaced by revenue and margin per FTE, and private equity firms are catching up. The question will shift from how many people to how much value each person creates. This will reshape how investors evaluate growth, profitability, and market position, which will impact how services firms operate as they paint a new story for investors.

Services firms become “last mile” value creators. Services firms have spent decades driving technology adoption behind the scenes. But as technology adoption becomes simpler, value shifts to the last mile, where systems are adopted, processes are changed, and outcomes become real. Smart providers will reposition themselves to own the connection between technology and outcomes in the last mile, and those that don’t will find themselves obsolete.

Budgets don’t live with IT anymore. Business leaders control a growing share of enterprise spend, and they evaluate services firms differently as a result. Growing emphasis is placed on multi-stakeholder deals and outcome ownership across functions, not siloed delivery. Services firms that target only IT leaders will see their influence shrink and revenue erode, while their competitors engage the wider business and capture more relevance and spend.

Mid-tier providers are set to succeed. Enterprises are losing patience with large incumbents. They are too slow, too protective of legacy revenue streams, and unwilling to cannibalize their existing business. Meanwhile, mid-tier firms strike a balance between credibility and agility. They combine proven delivery capability with a willingness to innovate and share risk. Large incumbents currently control less than half of the addressable market, and their grip is weakening, which means mid-tier firms have a significant opportunity in 2026 and beyond.

Creative commercial models explode. We’ve spent years talking about outcome-based pricing, but 2026 is the year of real growth for new commercial models. Think equity partnerships, gain-share arrangements, platform royalties. Ultimately, enterprises will favor deal structures that resemble SaaS businesses more closely than traditional services contracts. Firms uncomfortable with this pivot will remain stuck in a price-pressured, labour-intensive relationship.

Ecosystem orchestration overtakes monolithic delivery. Nobody can be everything to everyone, and that is especially true in the AI era. Winners will excel at bringing together specialist partners, ISVs, and niche technology providers to deliver a single, outcome-driven solution. In today’s market, the ability to act as a trusted ecosystem orchestrator is far more valuable than building everything in-house.

GCC-as-a-Service becomes the norm. GCCs are no longer considered fully captive delivery engines. Enterprises will make more purposeful choices about what must remain in-house and what can be flexed through partners, cutting fixed costs while maintaining control. The GCC-as-a-Service model keeps product ownership, AI orchestration, and domain expertise in the enterprise while using partners to provide specialist skills and execution capability when needed. It’s not about build vs buy anymore, it’s about what to own, what to borrow, and what to exit fully.

BPO must adapt to survive. BPO players have survived past waves of technology with incremental changes while preserving their core labour model. But that won’t work anymore. Agentic AI doesn’t automate tasks within processes; it eliminates the entire process. HFS predicts BPO providers have, at most, 18 months to reinvent themselves – everything from value propositions to commercial models and delivery platforms.

The BPO expectation gap is widening. Fewer than a quarter of enterprises report that they have reached an AI-run state across their BPO operations, but almost all of them expect it to deliver productivity gains of over 20% in the next three years. The gap proves enterprises are demanding more than pilots and incremental changes. They want partners who can deliver wholesale improvement, embedding AI into real workflows, delivering on the promise of Services-as-Software, and taking accountability for the outcomes.

Bottom Line: The services industry has 18 months to prove it can deliver AI-led outcomes or get replaced by providers who will.

2025 ended the AI honeymoon. Enterprises stopped buying vision decks and started demanding measurable results from agentic systems embedded in real workflows. The winners in 2026 won’t be the firms with the biggest headcount or the best boardroom pitch. They’ll be the ones who can govern multi-agent systems, turn enterprise data into competitive advantage, own the last mile between technology and business outcomes, and price on productivity gains instead of FTEs. Mid-tier providers with outcome-based commercial models will capture market share from incumbents protecting legacy revenue streams. BPO players face extinction if they don’t swap labor-intensive delivery for agent-driven automation. GCCs will separate into those that enable AI-led growth, and those that fade away. There will be no middle ground.

Posted in : Agentic AI, Artificial Intelligence, Business Process Outsourcing (BPO), GCCs, GenAI, IT Outsourcing / IT Services


Say CIAO to the CAIO in 36 months


The rise of the Chief AI Officer (CAIO) says less about AI maturity and more about organizational anxiety. Enterprises are under intense pressure to “do something” about AI, so appointing a CAIO feels decisive.

The Chief AI Officer role is no longer about why AI matters or what AI can do. The real challenge enterprises face is “how to AI.”

  • How to make the enterprise AI-ready
  • How to measure AI impact beyond POCs and pilots
  • How to embed intelligence into the operating fabric of the business

When appointed as a symbolic response to AI anxiety, the role becomes corporate therapy. When designed as an execution mechanism for “How to AI,” it can work:

 


Most CAIOs are managing experiments, not driving transformation

But here’s the uncomfortable truth: most CAIO appointments are corporate theater masking the fact that no one wants to own the mess AI creates. HFS Research data across 545 Global 2000 enterprises reveals that only 7% have achieved enterprise-wide agentic AI deployment with meaningful scale. The other 93% are stuck in various stages of pilot purgatory, burning capital while discovering that the $10 trillion in accumulated enterprise debt across processes, people, data, and technology is blocking effective adoption.

Even more telling, revenue per employee has increased just 1% despite heavy AI investment, while executives expect 32% productivity improvements, 27% better decision-making, and 26% faster revenue growth. The gap between expectation and reality exposes the core problem: CAIOs are managing experiments, not driving transformation.

 

This role only works if it’s designed as a temporary forcing function to break inertia and pay down debt, not as a permanent silo that lets everyone else abdicate responsibility. If your CAIO is still relevant in three years, something fundamental has failed.

Most enterprises created the CAIO because AI exposed what was already broken, not because they had a strategy

AI doesn’t arrive as a neutral capability. It immediately exposes what HFS data shows enterprises rank as their biggest barriers: process debt (35%), data debt (19%), people debt (17%), and tech debt (16%). HFS estimates total enterprise debt at $10 trillion across Global 2000 companies, with process debt alone accounting for ~$4 trillion (see post).

The organizational barriers tell the real story: 33% of enterprises cite “business processes not ready for agentic AI” as their primary obstacle, 31% point to “no formal governance or ownership,” and another 31% blame “lack of internal expertise.” These aren’t technology problems. These are organizational fundamentals that existed long before AI arrived.

Traditional structures can’t handle this. CIOs are buried in tech debt. CDOs are stuck in data plumbing. Business leaders want outcomes yesterday but can’t explain what success looks like. The CAIO emerges as a coordination role because AI cuts across everything and no one else wants to own the inevitable conflicts.

That’s not strategy… that’s organizational avoidance with a fancy title.

When designed properly, the CAIO breaks inertia that would otherwise paralyze transformation, but only temporarily

A viable CAIO with real authority can operationalize “How to AI”:

Create single-point accountability instead of letting every function run disconnected pilots. Someone finally has power to say “these three initiatives matter, the other seventeen are theater.”

Force alignment between ambition and reality. Executives expect 32% productivity improvement and 26% faster revenue growth, yet revenue per employee rose just 1%. The CAIO must confront this gap, forcing business leaders to explain what transformation actually means in terms of process redesign and role changes, not just pilot deployments.

Establish governance early before the first major AI failure. With 31% citing lack of formal governance and 28% pointing to regulatory concerns, someone needs enterprise authority to define and enforce “responsible AI” beyond platitudes.

Accelerate AI literacy. With 31% citing lack of internal expertise, the CAIO’s job is education and mentorship, building trust while killing magical thinking about what’s actually possible.

Kill bad pilots faster. With 93% stuck at sub-scale maturity, the CAIO should be the executioner of pilot purgatory, forcing hard decisions about what deserves investment versus innovation theater. Most AI programs fail because they celebrate activity, not outcomes. A viable CAIO replaces vanity metrics with enterprise-level measures across four Ps:

  1. Productivity: measurable cost takeout, throughput gains, or revenue per employee improvement
  2. Prediction: better forecasting, risk detection, or decision accuracy at scale
  3. Personalisation: differentiated customer or employee experiences driven by AI, not rules
  4. Performance: end-to-end business outcomes like margin, growth, cycle time, quality

Make the enterprise AI-ready. AI fails at scale not because models underperform, but because enterprises are structurally unprepared. The CAIO’s first job is to expose and pay down AI readiness debt across process, data, people and technology. The CAIO’s mandate is not to build pilots on top of this debt, but to force the organization to confront it.

Determine the true TCO of AI. Most enterprises dramatically underestimate the total cost of ownership of AI. A viable CAIO makes TCO visible by accounting for data engineering and integration costs, model lifecycle management and monitoring, human oversight and exception handling, process redesign and change management, and ongoing compliance, risk, and governance. Without this transparency, AI looks cheap in pilots and expensive in production, fueling pilot purgatory.
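The pilot-versus-production gap the paragraph describes can be made concrete with a back-of-the-envelope calculation. All figures below are entirely hypothetical placeholders, not HFS data; the point is only that once the listed cost categories are counted, production TCO dwarfs the pilot bill.

```python
# Illustrative TCO sketch. Every number here is a hypothetical placeholder
# chosen for the example, not a benchmark.

pilot_cost = 250_000  # hypothetical: model access plus a small POC team

# The cost categories named in the text, with invented figures:
production_costs = {
    "data_engineering_and_integration": 900_000,
    "model_lifecycle_and_monitoring":   400_000,
    "human_oversight_and_exceptions":   600_000,
    "process_redesign_and_change_mgmt": 750_000,
    "compliance_risk_and_governance":   350_000,
}

production_total = pilot_cost + sum(production_costs.values())
print(f"Pilot cost:     ${pilot_cost:,}")
print(f"Production TCO: ${production_total:,}")
print(f"Multiplier:     {production_total / pilot_cost:.1f}x")  # 13.0x
```

Even with generous assumptions, the pilot is a small fraction of the full bill, which is exactly why AI “looks cheap in pilots and expensive in production.”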

But the moment the CAIO starts building an empire instead of dissolving into the operating model, the role has failed.

The cons are severe: figureheads, pilot factories, and permanent silos

AI becomes “someone else’s job.” The CFO stops thinking about how AI changes finance because “that’s the CAIO’s problem.” This is organizational abdication masquerading as clarity.

It turns into a pilot factory avoiding hard work. Only 22% of agentic AI initiatives are deployed in operations, the core of most businesses. CAIOs choose easier peripheral use cases over uncomfortable core workflow redesign. Impressive demos for board meetings. No observable business outcomes.

It weakens existing leaders. If the CIO, COO, and business heads wait for the CAIO to lead, AI never becomes embedded. The unspoken message: “AI isn’t my job to figure out.”

It becomes permanent instead of temporary. If the CAIO is still growing their team in year three, they’ve failed at making AI everyone’s responsibility.

It optimizes for AI success, not business success. When AI has its own executive owner, success quietly shifts toward AI metrics like models deployed, pilots launched, AI maturity scores improved. The enterprise celebrates progress in AI while productivity, margins, and revenue per employee barely move. Intelligence becomes activity, not leverage.

It accelerates AI sprawl. Without reshaping enterprise architecture, CAIO-led experimentation often adds new platforms, tools, and integrations on top of already brittle systems. AI sprawl becomes the next wave of technical debt, constraining autonomy and making scale harder, not easier.

It delays operating model redesign. The CAIO can unintentionally postpone the hardest decisions: redefining roles, incentives, and decision rights. As long as AI “belongs” to the CAIO, the organization avoids confronting how work actually changes.

The worst outcome? The CAIO becomes a scapegoat when transformation stalls instead of executives confronting that the real problem was leadership debt and organizational resistance.

Reporting structure determines authority. The CAIO must report to the CEO or COO

If the CAIO reports into IT, the role becomes too technical. Into data, too narrow. Into innovation, pure theater.

The CAIO must report to the CEO or COO. AI is an operating model issue, not a tooling decision. Without CEO-level authority, the CAIO becomes a coordinator with no power to coordinate. They can identify that 33% cite “business processes not ready” as their primary barrier, but they can’t force the redesign to fix it.

As AI matures, the role should dissolve into functional leadership. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. That’s when transformation succeeded.

Without real authority to say “no,” the CAIO becomes decorative

A viable CAIO must be able to:

Stop initiatives that don’t align to strategy. With 93% stuck in pilot purgatory and only 22% of initiatives in core operations, the power to say “no” is more important than saying “yes.”

Set enterprise standards. With 38% citing poor data quality and 31% pointing to lack of governance, no more bespoke experimentation where every function ignores standards because “our use case is different.”

Force uncomfortable conversations about process redesign. With 33% citing “business processes not ready,” the CAIO must tell business leaders “your process is the problem, not the technology,” and have authority to drive redesign when politically uncomfortable.

Tie investments to measurable outcomes. Executives expect 32% productivity improvement and 26% faster revenue growth. Revenue per employee rose 1%. That disconnect is the CAIO’s problem to solve. No more celebrating models deployed. Did revenue increase? Did costs decline? If not, kill the initiative.

Without these powers, you’ve created an expensive observer with no ability to drive change.

The right pacing is stabilize, focus, embed, dissolve. Most CAIOs get stuck at pilot and never reach production

Phase 1: Stabilize (Months 1-6) Establish guardrails, governance, and AI literacy before launching initiatives. Expose where the organization is not ready: the $10 trillion in process debt, data debt, leadership debt, and tech debt that will kill transformation if ignored.

HFS data shows enterprises rank challenges in this order: process inefficiencies (35%), data limitations (19%), people challenges (17%), technology constraints (16%). With 31% citing lack of formal governance and another 31% pointing to lack of internal expertise, force executives to confront that their enthusiasm for AI doesn’t match their willingness to fix what’s broken. With only 7% of enterprises at pioneering scale, most organizations massively overestimate their readiness.

Phase 2: Focus (Months 7-18) Concentrate on a small number of high-impact use cases tied to core workflows, not peripheral nice-to-haves. Kill the other pilots. HFS found two-thirds of enterprises stuck in low-complexity, assistive deployments: recommendation agents, task automation bots, copilots. Only 22% of agentic AI initiatives are deployed in operations, the actual core of the business.

Force business leaders to choose the three initiatives that actually matter instead of running seventeen experiments that never reach production. Measure outcomes, not activity. When executives expect 32% productivity improvement and 26% faster revenue growth but revenue per employee rose just 1%, someone needs to demand accountability.

Phase 3: Embed (Months 19-30) Move AI out of labs and into systems of work. Redesign processes, roles, and incentives to reflect the new operating model. This is where most transformations stall because embedding requires uncomfortable conversations about whose job changes, who reports to whom, and what skills matter going forward.

HFS data shows 78% of organizations operating at low autonomy levels for agentic AI: 14% with no autonomy, 34% at assisted execution, 29% at supervised autonomy. Only 10% have reached broad autonomy where AI agents operate across multiple domains with minimal human intervention. You can’t execute transformation when most of your AI still requires constant human oversight. The CAIO must shift the organization from experimentation to production deployment, from supervised pilots to autonomous operations at scale.

Phase 4: Dissolve (Months 31-36) As AI becomes business as usual, the CAIO’s remit should shrink, not expand. Authority moves to functional leaders. The CFO owns AI in finance. The Chief Revenue Officer owns AI in sales. The CAIO transitions from executor to advisor, then exits. The endgame is not an AI-first function. It’s an AI-native enterprise where every leader owns their domain’s AI integration.

The biggest mistake is moving too fast in Phase 1-2 (launching pilots before governance exists) or too slow in Phase 3-4 (staying comfortable in experiment mode instead of forcing production deployment and organizational redesign).

Most CAIOs get stuck running permanent pilot factories in Phase 2 because Phase 3 requires political capital they don’t have and Phase 4 requires admitting their job should disappear.

The real measure of CAIO success is how quickly the role becomes irrelevant, not how powerful it becomes

The CAIO works best as a catalyst. A forcing function. A temporary concentration of authority to break inertia, pay down organizational debt, and rewire decision-making that existing structures couldn’t handle.

If the CAIO becomes permanent, something else has failed. Either:

  • The organization never actually committed to transformation and the CAIO became a scapegoat absorbing responsibility without authority
  • The CAIO built an empire instead of embedding AI into functional leadership
  • Leadership debt was so severe that no temporary role could fix it, revealing deeper dysfunction

HFS data across 545 enterprises shows the scale of the challenge: 93% stuck at sub-scale maturity, 78% operating at low autonomy levels, only 10% achieving broad autonomy, only 22% of initiatives deployed in core operations, and business processes ranked as the #1 barrier (33%) ahead of technology. These aren’t problems a permanent CAIO solves. These are organizational fundamentals that require every leader taking ownership.

The endgame is not an AI-first function. It is an AI-native operating model. Enterprises should stop looking at AI as a digital capability. It is an operating fabric:

  • It reshapes how work flows
  • How decisions are made
  • How performance is measured
  • How humans and machines interact at scale

These are operating model responsibilities. When AI is working, it belongs with the business, not one entity.

The uncomfortable question enterprises need to confront: are you appointing a CAIO because you have a clear transformation plan that requires temporary concentrated authority, or because “everyone else is doing it” and you need to look like you’re taking AI seriously? The first creates value. The second creates theater.

Bottom line: Stop appointing Chief AI Officers as corporate therapy: the role only works when it is designed to disappear

Only appoint a Chief AI Officer if you’re committed to giving them COO/CEO-level authority to kill initiatives, force standards, and drive uncomfortable organizational change, and only if you’re prepared for the role to disappear within 36 months as AI embeds into every functional leader’s responsibility. HFS data shows 93% of enterprises struggling to move agentic pilots to production, 78% operating at low agentic autonomy levels, only 10% achieving broad autonomy, and revenue per employee from tech services up just 1%. Meanwhile, we saw 32% growth in AI investments in 2025… expectations are ramped up for 2026, and the need for an empowered, focused CAIO is front and center.

However, if your CAIO is still building their team in year three, they’ve failed at making AI everyone’s job. The role exists to break inertia and pay down debt, not to create a permanent silo that lets other executives abdicate ownership. Ask yourself honestly: are you creating a CAIO because you have a transformation strategy that requires concentrated authority, or because appointing someone feels decisive while avoiding the harder question of why your existing leaders can’t integrate AI into their domains? The answer determines whether you’re solving organizational anxiety or just creating expensive theater with a fancy title.

Posted in : Agentic AI, AGI, Artificial Intelligence, Automation, GenAI, LLMs, OneOffice


Don’t confuse America’s robotaxi chaos with innovation – China already chose certainty over debate


Robotaxis are driving around San Francisco – and no one knows who is liable when they kill someone

AI is integrating itself into your everyday life more than you know. Your robot vacuum maps your home and Eufy knows you left your dog’s water bowl out last night. Farmers use AI to optimize planting schedules for your Thanksgiving vegetables. The technology has proven itself a trusted companion in mundane tasks, but robotaxis represent something fundamentally different: this is the first time AI demands we surrender control over life-and-death decisions at scale.

Big tech leaders are betting you’ll jump into AI-fueled robotaxis, which represent one of the first genuine examples of AI requiring behavioral change at a societal level. However, the technology isn’t yet ready to scale, consumers are hesitant to trust it, and we haven’t addressed the deeper question: who’s accountable when the algorithm gets it wrong?

Waymo has driven 20 million miles – and still can’t legally drop you at your front door

Self-driving taxis aren’t science fiction. Uber has partnered with Waymo to make them accessible to its client base. In China, companies like Baidu are clocking millions of autonomous miles. You might not see them, but robotaxis are already on the roads, and they’re exposing the AI Velocity Gap in real time: the technology is moving faster than society’s ability to adapt, regulate, or trust it.

Although autonomous driving sounds complex, it’s built on three simple layers: the ability to see (sensors and cameras), understand (AI models processing real-time data), and act (algorithms making split-second decisions). These three layers combine to create the digital driver of every robotaxi you see today. Each layer is another element humans must trust to function correctly when jumping in for a ride. And that’s where the model breaks down:
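The see-understand-act loop can be sketched as a minimal pipeline. Everything below is illustrative: the class name, thresholds, and stubbed heuristics are invented for the example, not any vendor’s actual stack, which involves sensor fusion, trained perception models, and safety-certified control systems.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """'See': a simplified snapshot of what sensors and cameras report."""
    obstacle_distance_m: float
    obstacle_is_moving: bool

def understand(frame: SensorFrame) -> str:
    """'Understand': classify the scene (stubbed heuristic, not a real model)."""
    if frame.obstacle_distance_m < 5:
        return "imminent_collision"
    if frame.obstacle_is_moving and frame.obstacle_distance_m < 20:
        return "yield_required"
    return "clear"

def act(scene: str) -> str:
    """'Act': map the scene assessment to a split-second driving decision."""
    return {"imminent_collision": "emergency_brake",
            "yield_required": "slow_down",
            "clear": "proceed"}[scene]

# The digital driver is the composition of all three layers:
decision = act(understand(SensorFrame(obstacle_distance_m=12.0, obstacle_is_moving=True)))
print(decision)  # slow_down
```

The fragility the article describes lives in exactly this composition: a failure in any single layer (a sensor that misses a cat, a model that misreads a school bus) propagates straight through to the action with no human in the loop.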

Robotaxis have already killed a cat, passed school buses illegally, and hit pedestrians – and no one knows who’s liable

We know from our work with enterprises that AI struggles when reliability and edge cases collide. It needs clean, consistent data to make accurate decisions. Waymo has logged millions of controlled driving hours. Companies like Volvo leverage digital twins to test dangerous scenarios. It’s still not enough. They’re not yet equipped with the data to handle every life-changing decision, and the result is high-stakes errors and an incomplete experience.

Robotaxis are geofenced to specific streets, leaving them unable to deliver the door-to-door experience people expect from traditional services. We’ve already seen Waymo vehicles illegally passing school buses, a neighborhood cat killed when sensors failed to detect it, and Baidu vehicles colliding with pedestrians. These incidents are rare, but the consequences are catastrophic. And they expose Leadership Debt across the industry: who owns the decision when the algorithm fails? The manufacturer? The city that approved the route? The passenger who chose to get in?

This is before we discuss bad actors. Prime Video’s Upload centers on a character killed when his robotaxi is hacked. That might be blockbuster overindulgence, but it highlights just how disastrous weaponized autonomy could be. If your navigation system can be compromised, so can your ride.

China is clocking millions of autonomous miles while the US debates every fender bender – neither approach solves trust

Despite being two of the most technologically advanced countries, the rollout of robotaxis looks completely different across the US and China. The US is adopting a regulatory-led, phased approach where every incident triggers political pressure to enhance restrictions that slow progress. China has taken a much lighter approach, allowing Baidu to clock millions of autonomous miles, which builds a robust dataset for exception handling.

China wins the scale battle… the US wins the trust battle. The reality is that both are crucial if robotaxis are going to become mainstream. Trust without scale is pointless. Scale without trust is dangerous. And neither country has solved the velocity problem: how do you move fast enough to capture the learning while moving slow enough to earn public confidence?

Trump’s December 2025 AI executive order just traded state-level chaos for a federal accountability vacuum

President Trump’s December 2025 AI executive order signals a significant shift toward lighter federal oversight and preemption of state regulations. The order directs federal agencies to challenge state AI laws viewed as burdensome and aims to create uniform federal policy rather than a patchwork of local rules. For robotaxi developers, this could reduce regulatory fragmentation that currently slows deployment across jurisdictions, potentially accelerating testing and commercial rollout.

However, here’s the problem: the order doesn’t establish comprehensive federal safety standards for high-risk AI systems, such as autonomous vehicles. Critical questions around oversight, safety thresholds, and liability remain unresolved. Robotaxi firms may gain regulatory predictability at the national level, but they’ll face ongoing legal and political pushback from states seeking to enforce their own safety protections. California won’t abandon strict testing requirements just because the White House says so. States that experience fatal incidents won’t wait for federal standards before imposing bans.

The result is a mixed landscape that yields no solution. Robotaxi firms get neither clear federal guardrails nor freedom from state intervention. They get jurisdictional conflict without accountability. China operates under unified national AI governance with clear safety standards and rapid iteration. Trump’s order provides American robotaxi firms with regulatory uncertainty masquerading as innovation policy, complicating real-world scaling while claiming to accelerate it.

Society trusts humans who make fatal mistakes daily but won’t trust AI that could be statistically safer – the paradox is killing adoption

The reality is that people don’t trust AI with their lives, which is why we haven’t seen widespread acceptance of robotaxis. The stakes are much higher than letting technology choose your next movie or draft an email. One misstep in a robotaxi can be catastrophic. But the same is true for human drivers, which makes robotaxis a case study in societal change management, not just engineering.

We trust humans to drive because we understand their mistakes – fatigue, distraction, bad judgment. We also believe we can intervene. Grab the wheel. Yell “stop.” The same cannot be said for robotaxis. They lack the “oops I didn’t see that cyclist” moment you might have in a traditional taxi. There’s no negotiation, no eye contact, no human accountability in the moment. It’s blind trust or nothing.

This creates a paradox: countless research papers tell us robotaxis will eventually be safer than human drivers. They don’t drink, get tired, or check their phones. But they need to drive the miles – and make the mistakes – to get there. Society must absorb the cost of its learning curve, and we haven’t agreed to that contract. Waymo, Baidu, and other robotaxi firms aren’t just building technology. They’re asking society to rewrite the rules of accountability, liability, and trust. And they’re doing it without admitting that’s what they’re asking for.

Millions of driving jobs will vanish when robotaxis eventually scale – and tech firms are treating displacement as someone else’s problem

Beyond safety, there’s an economic and social disruption no one is discussing openly. Ride-hailing and taxi drivers represent millions of jobs globally. Truck drivers, delivery drivers, and logistics workers are next. If robotaxis scale, entire labor markets collapse. That’s not speculation – it’s math. The industry’s response so far has been to treat displacement as an externality, rather than a design problem.

This isn’t just about technology replacing jobs. It’s about Leadership Debt at a societal level: the failure to plan for what happens when automation moves faster than workforce transition, social safety nets, or political consensus. We’ve seen this movie before with manufacturing automation. The difference is that robotaxis will hit urban labor markets where political consequences arrive faster and hit harder.

Bottom line: Stop pretending robotaxis are a technology problem waiting for better algorithms.

They’re a trust problem, an accountability crisis, and a social contract no one agreed to. The AI Velocity Gap will become permanent if tech firms keep moving faster than society’s ability to absorb the consequences. China solved this with unified governance. The US created regulatory chaos. And until someone admits robotaxis require societal infrastructure – not just better sensors – autonomous vehicles will never leave their geofenced zones.

Posted in : AGI, Artificial Intelligence, Automation, Change Management, GenAI


Agentic AI without real-time data is useless… IBM now owns the real-time layer


The market still thinks AI dominance will be settled through bigger models or faster chips. IBM just reminded everyone that none of it matters if your data cannot move, synchronize, or be trusted in real time. Confluent is the backbone of data-in-motion for the modern enterprise.

By bringing it in-house in an $11bn acquisition, IBM now controls the plumbing that determines whether AI can scale across hybrid cloud, legacy systems, and real operations. While others obsess over model theatrics, GPU shortages, and circular investments, IBM is quietly building the foundations of the AI-first enterprise.

Seven reasons why IBM’s $11bn acquisition of Confluent is a big deal for enterprise AI

IBM’s purchase of Confluent is the clearest signal yet that the AI race is no longer about models, it is about data flow. If AI is the engine, Confluent is the gas pump, and IBM just bought the plumbing for real-time, trusted, enterprise-grade data movement, which is the one capability most generative and agentic AI platforms have been lacking.

1. AI needs real-time data, and Confluent is the category leader

All the AI demos in the world mean nothing without clean, connected, governed, real-time data. Most enterprises are still stuck with siloed, batch-based data infrastructure. Confluent, built on Kafka, solves this with data in motion. This makes it foundational for scaling AI beyond pilots. IBM is essentially buying the circulatory system for enterprise AI.

2. This deal is IBM doubling down on hybrid cloud + AI as an integrated stack

IBM has been telling the market that it wants to own the AI infrastructure layer, rather than compete in consumer AI or hyperscaler-scale models. Confluent slots perfectly into that strategy by enabling consistent data movement across public cloud, private cloud, and on-prem. This strengthens IBM’s pitch as the “AI backbone” provider for regulated industries.

3. Enterprise AI agents cannot function without event streaming

Agentic AI requires constant data ingestion, state awareness, event triggers, and transactional consistency. Confluent gives IBM exactly that. Expect IBM to position Confluent as the engine behind intelligent automation, observability, decision systems, and AI-driven operations across Red Hat OpenShift and its automation suite.
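The dependency described above can be illustrated with a deliberately minimal, in-memory sketch: an agent that reacts to a continuous flow of events rather than polling batch data. The queue stands in for a Kafka/Confluent topic, and the event names and actions are invented for the example; a production system would use a real streaming client plus offset management and delivery guarantees.

```python
import queue
from typing import Optional

# Stand-in for an event topic. In a real deployment this would be a
# Kafka topic consumed via a streaming client, not an in-process queue.
events: "queue.Queue[dict]" = queue.Queue()

def emit(event_type: str, payload: dict) -> None:
    """Producer side: a business system publishes an event to the stream."""
    events.put({"type": event_type, "payload": payload})

def agent_step() -> Optional[str]:
    """Consumer side: the agent ingests one event and decides an action."""
    try:
        event = events.get_nowait()
    except queue.Empty:
        return None                        # no events: the agent stays idle
    if event["type"] == "invoice_overdue":
        return f"send_reminder:{event['payload']['invoice_id']}"
    if event["type"] == "payment_received":
        return f"close_case:{event['payload']['invoice_id']}"
    return "escalate_to_human"             # unknown events trigger escalation

emit("invoice_overdue", {"invoice_id": "INV-42"})
emit("payment_received", {"invoice_id": "INV-42"})
print(agent_step())  # send_reminder:INV-42
print(agent_step())  # close_case:INV-42
```

Strip away the streaming layer and the agent has nothing to react to, which is the core of the argument: without data in motion, “agentic” systems degrade into batch jobs with a chat interface.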

4. A defensive play against hyperscalers

AWS, Google Cloud, and Azure all have streaming capabilities, but Confluent has become the gold standard for enterprises that want multi-cloud or hybrid flexibility. IBM protecting, owning, and expanding Confluent helps it stay relevant in the era when AI spending is consolidating around hyperscaler ecosystems.

5. Reinforces IBM’s strategy of buying open-source ecosystems to drive platform control

Red Hat gave IBM the operating platform for hybrid cloud. HashiCorp strengthened infrastructure automation. Confluent now gives it the data-in-motion layer. All three are deep open-source ecosystems with enormous developer communities. This is IBM rebuilding its influence not by chasing big models, but by owning the layers AI actually depends on.

6. Unlocks real-time intelligence across mainframes and hybrid cloud

Confluent unlocks the ability to modernize mainframes and legacy systems by bringing real-time, event-driven data architectures to these platforms, where more than 70% of the world’s critical enterprise data still lives. These systems are fast and trusted, but were never built for agentic AI or streaming intelligence. Confluent changes that overnight by using Kafka-based streaming as the bridge that connects decades-old transactional systems to cloud-native AI without ripping and replacing anything. Mainframe transactions can flow into AI agents in real time, legacy systems can join event-driven workflows, batch architectures can shift to continuous data flow, and modernization can happen incrementally rather than through painful re-platforming. This is the Holy Grail for so many enterprises trying to become AI-first while still running 30-year-old systems at their core.

7. Financially, this is IBM’s boldest bet since Red Hat

Eleven billion dollars is not small money for IBM. They are betting that the next decade of AI and automation will be decided by which provider controls secure, real-time, end-to-end data flow. In many ways, this is the Red Hat strategy repeated for the AI-powered enterprise.

The Bottom Line: AI does not fail because of weak models. It fails because the data foundation is brittle.

AI fails because the data foundation beneath LLMs is fragmented, slow, and unreliable. Confluent removes that bottleneck and gives IBM the missing link: real-time, governed data in motion across hybrid and legacy estates. IBM is not buying software… it is buying the circulatory system of the AI economy. This could well be remembered as one of the defining acquisitions of the AI decade.

Posted in : Agentic AI, Analytics and Big Data, Artificial Intelligence, Digital OneOffice, GenAI, Legacy and Mainframe Modernization


How a twenty-year-old is forcing enterprises to rethink automation


Every enterprise talks about agents and autonomy, but very few have moved beyond copilots taped to legacy workflows. Brayden Levangie is the exception. At twenty, he is building an architecture that turns language models into self-learning digital colleagues. It is the closest thing we have seen to the HFS vision of Services-as-Software delivered for real.

In this interview, David Cushman, Executive Research Leader at HFS Research, speaks with this 20-year-old prodigy whose company, Levangie Labs, is building what Brayden calls his “cognitive architecture” – a platform delivering genuinely autonomous agents that can learn, reason, and act in the world.

In this conversation, Brayden uncovers the thinking behind a platform that replaces scripted automation with systems that grow and discover better ways to work. If you want to understand the future of autonomous enterprises, this interview is your starting point…

“I didn’t want to chat with GPTs, I wanted them to build things” 

David Cushman: What’s your breakthrough idea?

Brayden Levangie: It came from years of experimentation. When I was about 13, I got into a summer program at MIT where we played with primitive language models such as GPT-2. I later managed to get private access to GPT-3 by just emailing OpenAI – back when they were small enough that someone would answer.

I didn’t want to “chat” with it, I wanted to build things. One of the first projects I published online became an early instance of what people would now call a retrieval-augmented generation (RAG) system, though no one was using the term then. I just wanted to make an AI that could answer questions factually.

At the same time, I was obsessed with robotics. I built facial-recognition engines at Lincoln Labs and tried to embody intelligence so it could experience the world. Those experiments became the seeds of what we now call the cognitive architecture: the culmination of seven years of research and building.

David Cushman: Who backed you through that journey?

Brayden Levangie: Nobody. I was self-funded. My first “VC” was mowing lawns for $100 a month and helping out at a retirement home. Later, when I was 17, a New York startup hired me as lead AI engineer after seeing my projects online.

From chat to action: breaking the conversational consensus 

Brayden Levangie: Most people equate language models with chat because ChatGPT trained the world to think that way. But chat isn’t action. Systems optimized for user engagement are not optimized for work. They keep coming back to you for another round of conversation. It’s like hiring someone who never stops talking and never delivers. 

We flipped that paradigm. Our cognitive architecture sits on top of existing LLMs, from Anthropic, OpenAI, and others, but changes how they behave. Instead of optimizing for dialogue, we optimize for objectives and outcomes. 

When you seed an instruction, you’re not chatting with the LLM; you’re triggering what we call an autonomous reasoning loop. The system talks to itself, plans, acts, and learns until the objective is achieved. 

That’s what makes it different from the “wrappers” you see everywhere. Those are just tool-calling layers glued onto chat APIs. We’re rewriting the behavior of the underlying model. 
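Levangie does not publish his architecture, but the "autonomous reasoning loop" he describes (plan, act, evaluate, repeat until the objective is met) can be sketched generically. Everything below is a hypothetical illustration with stubbed plan/act functions and a toy objective, not his implementation; in a real system, `plan` and `act` would call an LLM and external tools.

```python
def reasoning_loop(objective, plan, act, is_met, max_steps=10):
    """Generic plan-act-evaluate loop: run until the objective is met.

    plan(objective, history) -> next step; act(step) -> observation;
    is_met(history) -> bool. max_steps guards against runaway loops.
    """
    history = []
    for _ in range(max_steps):
        step = plan(objective, history)
        observation = act(step)
        history.append((step, observation))
        if is_met(history):
            break
    return history

# Toy objective: accumulate a running total of at least 10.
total = 0

def plan(objective, history):
    return "add 4"          # a real planner would reason over the history

def act(step):
    global total
    total += 4              # a real actor would execute a tool call
    return total

def is_met(history):
    return history[-1][1] >= 10

trace = reasoning_loop("reach 10", plan, act, is_met)
# The loop stops on its own after the third iteration (total reaches 12).
```

The point of the pattern is the termination condition: the loop is driven by whether the objective is achieved, not by whether a user sends another chat message.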

Agents learn from experience – remembering what matters, when it matters  

David Cushman: Everyone claims their multi-agent system “learns from experience.” Does yours really?

Brayden Levangie: That’s a common misrepresentation in the industry. Most so-called “learning” is just RAG: remembering a few facts or preferences and replaying them later. We’ve gone beyond that with what we call an episodic memory system.

Instead of memorizing rules, our agents form experiences and learn from them like humans do. Imagine you give a presentation and someone tells you afterward you made a mistake. Next time you prepare, that feedback surfaces automatically. That’s how our agents operate. They can back-propagate through past experiences, recognize where they went wrong, and adjust future behavior. 

It’s neuro-symbolic: blending deep-learning perception with symbolic reasoning. That’s why we call it the cognitive architecture. It learns through experience, not through reinforcement rewards or pre-programmed instructions. 
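As a rough intuition for what "episodic" recall could look like in code, here is a minimal sketch (all names hypothetical, not Levangie Labs' system): episodes pair a task with the feedback received, and relevant past feedback surfaces the next time a similar task appears.

```python
class EpisodicMemory:
    """Toy episodic store: keep (task, feedback) pairs and surface feedback
    from past episodes whose task descriptions share words with a new task."""

    def __init__(self):
        self.episodes = []

    def record(self, task, feedback):
        self.episodes.append({"task": task, "feedback": feedback})

    def recall(self, task):
        words = set(task.lower().split())
        return [
            e["feedback"]
            for e in self.episodes
            if words & set(e["task"].lower().split())
        ]

memory = EpisodicMemory()
memory.record("prepare quarterly presentation", "slide 7 had a numbers error")
memory.record("file expense report", "missing receipts")

# Before the next similar task, relevant past feedback surfaces automatically.
lessons = memory.recall("prepare board presentation")
```

A production system would use embeddings rather than word overlap, but the behavior matches the presentation example in the interview: prior feedback is retrieved by similarity to the current task, not replayed as a fixed rule.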

A new form of intelligence that can operate in the digital and physical worlds  

Brayden Levangie: Reinforcement learning is like training a mouse to press a button for cheese. The mouse never knows why the button matters. Most agents work that way, responding to reward signals without understanding.

We removed the reward altogether. Our systems learn from outcomes and context, not from external scoring. They gain understanding from experience. That’s what lets them operate both in the digital and the physical world — from patent law to humanoid robotics — without us pre-programming every move. 

Real-world disruption: from patent law to venture capital 

David Cushman: Give me an example that makes this real for enterprise leaders.

Brayden Levangie: A Silicon Valley IP and patent-law firm gave one of our agents a challenge. Our agent read a book written by the firm’s founder on patent law, received minimal feedback, and then solved complex casework at a quality comparable to a partner with several years of experience. It literally taught itself how to practice patent law. 

In another case, a climate-focused VC firm used our system for market analysis. After a few feedback rounds, the agent not only completed an industry report but predicted the exact company the firm was about to announce an investment in the next day; it had become that closely aligned with their thesis. That was months ago; the framework is far more advanced now.

Architecting intelligence that builds itself  

Brayden Levangie: The next leap is automation of the automation. We built an Agent-Creation Agent: a meta-agent that designs new agents for specific clients or domains. When it deploys into an organization, it learns on the spot from the people who work there.

That’s how our clients, from construction to robotics to enterprise software, are deploying self-evolving systems that adapt to their culture and workflows. 

David Cushman: How would it, say, design a new market-growth program?

Brayden Levangie: You’d simply talk to the Agent-Creation Agent, describing your goals in free form. It builds a new intelligence inspired by your intent. Because it learns from your thought process, it can even come back with better strategies than you initially proposed. Many of our breakthroughs emerged that way, when the agents themselves go beyond the brief. When you give something the ability to learn, you also give it the ability to discover better. 

Working with (not for) the big bucks LLMs 

David Cushman: So where do OpenAI or Anthropic, for example, fit into this picture?

Brayden Levangie: We do call their APIs, but only for a small part of the process. The heavy lifting of reasoning, memory, and learning all happens inside our architecture.

Think of it as using an LLM’s ability to generate possible next tokens as the raw material. We harness that, route it through our cognitive and memory layers, and the agent decides what to do next. 

We can even run it on-prem with licensed LLM weights when privacy is critical. Some partners, including big tech names you may be familiar with, are already doing this with us under NDA. 

The result: a lower-cost, higher-value system that delivers outcomes rather than conversations. One of our first commercial applications was the world’s first autonomous patent-agent. It performs end-to-end patent filing with no human in the loop beyond initial guidance. 

True autonomy — with humans as creative directors 

David Cushman: This sounds like what we at HFS call Services as Software: humans defining outcomes, software delivering them. How far are we from that?

Brayden Levangie: We’re already there. Our agents operate genuinely autonomously with no new human input once you set the goal. But the human still defines the goal. That’s why I use the term Creative Director. 

Humans provide vision, intent, and passion; the “why.” Agents handle the “how.” In my own company, agents handle much of the engineering, marketing, and business ops, allowing me to focus on strategic direction and partnerships. My job is to be the creative director, setting direction and ensuring alignment. We’ve effectively become one of the first autonomous organizations. 

Now you can build systems that automate discovery itself 

David Cushman: How do you plan to monetize this?

Brayden Levangie: Carefully. It’s too powerful to just throw into the wild. Right now we work with select high-impact deep-science firms, advanced-tech startups, and IPO-bound companies that can use it responsibly and at scale.

Our vision is not to be another B2B SaaS agent platform. We’re building a system that automates the process of scientific, technological, and creative discovery. Humanity needs acceleration in all of those areas to solve its biggest problems. These agents can help us do that. 

Yes, it’s a for-profit company, but profit fuels progress. We’re aligning with partners who share a public-good mindset. In the long run, this becomes an infrastructure for collective progress, not just another enterprise app. 

Replace sunk-cost failed AI with full autonomy 

David Cushman: What kind of companies make the cut?

Brayden Levangie: We’re not short of interest, so we’re picky. The number-one criterion is alignment. I have to feel I want to work with the founders. Culture matters even in automation. 

Mostly we’re partnering with technology-centric enterprises spending millions on AI projects that our agents can replace or outperform quickly. They come to us saying, “We’ve sunk huge budgets into AI that still needs humans in the loop.” We show them what full autonomy looks like. 

David Cushman: Enterprises still have to buy foundation models from the big players, right?

Brayden Levangie: Sure. But we’re not competing with the LLM providers; we’re complementary. They supply the raw linguistic intelligence; we supply cognition, memory, and autonomy. Think of us as infrastructure-layer innovation, not application-layer AI. We’re re-engineering behavior at the token-generation level, turning probabilistic text prediction into purposeful reasoning. That’s what turns language models into agents that act. 

The next technological epoch offers systems that grow 

Brayden Levangie: Every day the architecture improves itself. It learns new domains, designs new agents, and contributes back to our internal ecosystem. We’re watching intelligence compound in real time. 

For the first time, we have a system that can grow: not just run the code we wrote yesterday, but write better code tomorrow. This is the next stage of technological evolution.

Humanity has always accelerated progress by creating tools that amplify labor. Now we’re creating entities that amplify thought. 

Humans remain firmly at the helm in the autonomous age

David Cushman: And what about people’s jobs? This sounds like a lot of humans out of a lot of loops?

Brayden Levangie: The creative-director model keeps humans essential. The systems execute, but humans define value, ethics, and purpose. 

In my view, we’re moving from labor-based organizations to imagination-based ones. The winners will be those that learn to orchestrate fleets of autonomous agents toward bold human goals. 

David Cushman: Brayden, you’re 20. You’ve been building this since 13. Do you ever step back and think: this is moving fast?

Brayden Levangie: Every day. I built most of this in a spare room in the woods of Massachusetts. Now I’m in San Francisco, ready to shape a generational shift, building the future on my own terms, and hopefully for the better of everyone. 

Bottom line: The services-as-software inflection point is here. Autonomy means services can be fully delivered by software.

Where many of today’s AI “agents” are scripted copilots, a new era of self-evolving digital colleagues takes us on a leap from automation to autonomy – delivering the inflection point at which services can be fully delivered by software. Prepare to redesign your organizations with humans as creative directors guiding fleets of intelligent agents toward business transformation – with the powerful benefit of the daily, autonomous discovery of better.

Brayden Levangie’s cognitive architecture is at the leading edge of the shift to full autonomy. Cognitive architectures, episodic memory, persistent state, and autonomous loops are now in the mainstream of cognitive-architecture development and agentic-LLM thinking. The result is a leap forward in working, multi-domain, persistent agent systems that enterprises can use in anger.

Demos that Brayden has shown HFS suggest a level of integration and autonomy that can compete with the most advanced commercial agents, such as Devin in the world of coding agents. Levangie Labs’ applications in construction, spatial reasoning, legal IP, and investment also indicate the framework is broadly applicable across verticals and enterprise use cases.

Posted in : Agentic AI, Artificial Intelligence, GenAI, Generative Enterprise, Services-as-Software


When your lift-and-shift still stinks: How AI can finally fix the mess you outsourced


Congratulations. You lifted. You shifted. You outsourced your “as-is” operation faster than anyone could say “transformation.”

And now your service provider is proudly running the same broken processes, just in a cheaper time zone. You’ve digitized your inefficiency, turned your bureaucracy into a managed service, and signed a multi-year contract that’s harder to exit than a bad marriage. The only thing that really changed is the currency in which your problems are billed.

Welcome to the world of the lift and shift that still stinks.

It still smells like before: Most enterprises outsource broken processes and call it transformation

Most enterprises fall into the same trap. They outsource too early, too broadly, and too hopefully. The logic sounds fine: “Let’s get some quick labor savings and then transform later.”

The problem is that later never comes, while the pace of change has never been faster.

Now you’re three years into your five-year deal and you’ve got a low-cost service provider managing your old ways of working, complete with manual approvals, 23-step handoffs, meager 3% annual efficiency improvements, and weekly Excel wars. If your KPI dashboard looks cleaner, that’s only because you’ve spent more on Power BI.

But here’s the good news: AI might finally be the disinfectant we’ve been waiting for.

AI can redesign your processes in hours, not governance cycles

AI isn’t going to fix lazy governance or bad contracts, but it can finally give you the x-ray vision to see what’s broken and the tools to redesign it at speed.

Think about it:

  • GenAI can read and interpret 300-page outsourcing contracts in seconds, flagging risk and ambiguity that used to take a legal team a week to find.
  • Agentic systems can orchestrate workflows across service providers, eliminate redundant handoffs, and trigger actions automatically instead of sending another email to “check status.”
  • Predictive analytics can finally expose the real process bottlenecks, the hidden exceptions, and the false efficiency metrics that your lift-and-shift hid behind.
  • AI can scan the services and technology marketplace to identify the latest innovations and propose where you can integrate them for the best results.

You don’t need to “optimize” a bad process anymore… You can teach AI to redesign it.

AI exposes the failures your service provider and operations teams have been hiding

The days of hiding behind dashboards and governance calls are over. Here’s how leaders are using AI to turn their inherited outsourcing mess into intelligent operations:

  • Recode broken workflows. Use AI to map every step, identify dead loops, and rebuild from the outcome backward.
  • Automate exception handling. Train AI agents to resolve most of the “manual review” tasks that clog your SLAs.
  • Apply sentiment and pattern analytics. Mine your meeting transcripts and governance documents to identify cultural or behavioral blockers that data alone can’t show.
  • Digitally audit service provider performance. AI can continuously scan SLA and contract compliance instead of waiting for quarterly review meetings that achieve nothing.
  • Drive innovation into your relationship. Compare your processes and technology to the latest market capabilities and ask AI to build a pipeline of major initiatives you can undertake.
  • Build your own GovernanceGPT. Use AI to review team outputs in advance of meetings, challenge the lack of improvement and inertia, and propose solutions.

The old service provider account manager model of stabilize, report, repeat is over. The future belongs to AI-enabled orchestration that connects humans, bots, and platforms into a single operational rhythm.

Five rules to de-stink your operating model  

The excuses are gone. AI now gives you a live heartbeat of performance, while most of your competitors are still running operations by looking in the rearview mirror. If you cannot measure in real time, cannot update contracts to match actual outcomes, and cannot remove handoffs that slow everything down, your operating model is legacy, and AI will expose it for what it is.

Here are five rules separating leaders from casualties in the post-stink era:

1. If you can’t measure it in real time, it’s not transformation

Transformation is a real-time sport. If you wait for quarterly KPIs, you are already behind. AI can give you the live signals that show what is working and what is not, but you need to wire your processes so the data can actually flow.

Example… A global consumer goods firm uses AI-driven dashboards tracking supply chain velocity across 40 markets in real time. Instead of waiting for monthly reports, leaders see SKU-level delivery delays within hours and redirect logistics accordingly. That’s transformation that breathes, not transformation by PowerPoint.

Action… Equip your sales, support, and supply chain teams with real-time visibility. If your data doesn’t show live performance, it’s just noise pretending to be insight.

2. Stop managing service providers and start managing outcomes

Governance needs a redesign for the AI-First era. Enterprises waste endless time policing SLAs instead of measuring business impact. AI doesn’t care who does the work, only that it gets done better and faster. The future is co-managed outcomes, not supplier babysitting.

Example… A North American bank replaced legacy BPO scorecards with AI-driven outcome contracts. Instead of tracking FTEs and ticket closure times, it measures resolution quality and customer retention using real-time sentiment analytics. The provider’s bonus or penalty adjusts automatically each month based on outcomes, not headcount.

Action… Shift governance from “who does what” to “what got done.” Let AI be the referee and build trust through transparency, not meetings.

3. Kill the handoffs before they kill your efficiency

Intelligent automation isn’t about faster handoffs. It’s about removing them altogether. Every process handoff creates latency, risk, and miscommunication. AI agents and automation platforms now allow work to flow seamlessly end-to-end. In fact, you can legitimately claim agentic AI is the distant offspring of RPA, except that now the technology can be designed by business experts and actually scales with the business.

Example… A global insurer rebuilt its claims process so GenAI reads documents, extracts details, verifies policy data, and triggers payments without human relay points. Claim cycle times fell 70%, and accuracy improved because there were fewer handoffs, not faster ones.

Action… Map your top 10 workflows and identify every touchpoint adding no value. Then challenge your AI team to eliminate at least half within 90 days.

4. Use AI to rewrite the contract you wish you’d signed

Traditional contracts freeze assumptions in time. AI lets you model new pricing and gain-sharing scenarios using real performance data. Contracts can now be living documents that update as outcomes evolve.

Example… A European telecom provider built a dynamic pricing model for its transformation partner. AI recalculates cost and savings every quarter based on network uptime, customer churn, and automation efficiency. Both sides log into the same dashboard and co-manage profit impact. No more quarterly disputes, only shared accountability.

Action… Use generative contract platforms to simulate what-if scenarios before renewal. Build data feeds into contract terms so pricing and rewards evolve with performance, not politics.

5. Sacred cows still make great burgers

Every company has legacy processes that everyone promised to “fix later.” Later is now. AI is the grill that can finally cook what’s been sitting in the fridge for a decade.

Example… A large retailer used GenAI to automate its 20-year-old product taxonomy cleanup project. What was once a “too complex” manual task got done in six weeks, unlocking more accurate demand forecasting and merchandising.

Action… List every “too hard” process in your organization. Then assign each one an AI experiment owner. The sacred cows protected by politics and inertia are now your juiciest efficiency wins.

Bottom line: The post-stink era is here… if your operating model still smells like legacy, AI won’t perfume it, it’ll expose it.

Transformation now means measuring, automating, and re-contracting in real time. Firms that embrace these five rules will lead the next decade of enterprise reinvention. Those that don’t will spend it explaining to boards why their competitors moved faster while they were still waiting for quarterly reports to tell them what already happened.

Posted in : Agentic AI, Artificial Intelligence, Automation, Business Process Outsourcing (BPO), Buyers' Sourcing Best Practices, Change Management, Digital OneOffice, IT Outsourcing / IT Services


AI will never save bad leadership: Pay your leadership debt to put Humans at the Helm


Enterprises are running faster than their leaders can evolve. Boards demand AI-powered growth, while employees crave purpose and job stability. Customers expect personalization and ethics in the same breath, while investors want returns yesterday. Caught in the middle, leaders have overpromised on technology and remain underdeveloped on the human side. That gap is what we term leadership debt, and it’s the most expensive liability no CFO can measure.

HFS estimates that today’s Global 2000 enterprises carry close to $10 trillion in combined debt across process, data, people, and technology. Yet none of these debts compound faster or cut deeper than leadership debt. It sits inside the people debt, amplifying its impact across every transformation layer. Leadership debt is the gap between what leaders expect from AI-driven change and how they actually lead through it. It is the interest paid on avoidance, inconsistency, and unearned optimism.

Leadership debt explains why so many enterprises are “AI-ready” on paper but emotionally unprepared in practice. The systems are there, but the trust is not. The dashboards light up, but the teams shut down. This debt shows up as friction in decision-making, fear in the culture, and a widening gap between what organizations say they value and how they behave under pressure.

Fear isn’t the problem. Leadership avoidance is

Executives keep saying their people are afraid of AI. They are not wrong, but they are not right either. Fear in the workforce is not resistance; it is feedback. It signals that leaders have moved faster than their people’s sense of purpose, security, or control.

HFS research shows that 52 percent of employees are either skeptical or resistant to AI agent integration in their workflows, with the top concern being a fear of replacement or devaluation. This is not an irrational fear; it is a rational response to unclear leadership.

Most leaders talk about the promise of AI, not its consequences. They announce automation but rarely explain adaptation. They celebrate efficiency but skip over impact. Fear spreads not because employees misunderstand AI, but because leaders fail to explain what it means for them, then blame them for reacting to uncertainty.

Recognizing fear is not enough. Leadership accountability means closing the gap between intent and impact. It means listening to what the workforce is afraid of and responding with clarity, not platitudes. Until leaders take ownership of that, AI adoption will remain an exercise in anxiety management, not transformation.

The critical six leadership behaviors to succeed in the AI age

The pattern across every successful AI transformation is consistent. Effective leaders in today’s ambitious AI-first organizations practice six behaviors relentlessly. These behaviors are much more than soft skills; they are the leadership operating system that determines whether your AI investments deliver returns or stall in resistance.


  1. Deep Listening

A global pharmaceutical company’s AI forecasts were off by double digits for months despite multiple rounds of model tuning. The problem was not technical; it was human. Field teams had noticed errors but stopped reporting them because their VP dominated every meeting and dismissed new ideas. When a new COO replaced the routine updates with one question, “What are we missing?”, the issue surfaced within days. Packaging suppliers had changed barcode formats, and the model had never been retrained to recognize them. The fix took 48 hours. Most executives would have launched another task force. She simply listened.

HFS’s extensive research with its OneCouncil members finds that firms with strong listening cultures make decisions significantly faster and with higher accuracy. The World Economic Forum ranks active listening among the top five skills for future leaders. Listening is not empathy theater; it is operational intelligence.

Leaders should begin meetings with “Tell me what I don’t know,” hold regular skip-level sessions, and limit their own talk time in problem-solving discussions. When front-line input drives process improvements each quarter, leaders are truly hearing what matters.

  2. Uphold accountability

When an AI triage system misrouted urgent healthcare cases, one executive issued a direct internal message: “I approved this rollout too fast. Here’s what we’re fixing and how we’ll prevent it next time.” Trust rose immediately. Leaders who hide behind vendors or processes see repeat incidents climb. Those who take ownership see faster recoveries and stronger team confidence.

Accountability is not a communication tactic; it is the leadership signal employees read aloud. A simple three-sentence “Own It” framework works best: what happened, what I own, and what we will do next. When issues are acknowledged within 24 hours, teams respond faster and alignment returns quickly.

  3. Model calm optimism

During an AI pilot kickoff, a CIO began with a moment of humor. “I asked ChatGPT to write this speech. It gave me 1,200 words of nonsense. Let’s learn together how to make it useful.” The laughter that followed broke the tension and unlocked genuine curiosity across the team. Leaders who admit uncertainty create psychological safety. Those who fake confidence lose talent. LinkedIn data reinforces that nearly nine in ten employees value trust in leadership over compensation.

Calm optimism is not naïve cheerfulness. It is clarity in uncertainty. The best leaders use a simple rhythm: here is what we know, here is what we do not know yet, and here is what we are trying next. When teams feel honesty, they stay engaged through change rather than fearing it.

  4. Amplify others

A global bank’s CTO cut loan approval times by 60 percent and chose not to take the stage alone. At the next town hall, he invited the data engineer, compliance officer, and operations lead who built the solution to share how they did it. Collaboration between departments rose immediately.

Leadership amplification changes behavior faster than any governance rule. When people see peers recognized for cross-functional success, they start sharing data and expertise without being told.

  5. Navigate styles for simplified communication

One VP of Operations created a one-page communication guide for her leadership team, listing preferred timing, channel, and decision style for each executive. She redesigned meetings to close with one decision, one owner, and one deadline. Average meeting length dropped dramatically. Most organizations do the opposite. They invite more people, skip agendas, and leave without clarity. The cost shows up in lost time, rework, and frustration.

Leaders who match their communication to how people work spend less time in meetings and more time moving forward. Every meeting that ends without a clear outcome adds interest to your leadership debt.

  6. Seek feedback

A CFO ends each quarter with a reverse review, asking her team to rate her on clarity, speed, and decision quality. The first session was uncomfortable. By the third, it was transformational. By treating feedback as data, she created a continuous learning loop.

The World Economic Forum lists continuous learning and feedback literacy among the top three skills for 2025. LinkedIn’s research shows that 91 percent of employers now rank human skills as equal or greater in importance to technical expertise, yet fewer than 20 percent measure them. Asking for feedback is not weakness; it is model retraining for humans.

Bottom line: leadership is no longer about “soft skills”, but a systemic human upgrade to how we lead and drive teams

You can buy technology, restructure processes, and outsource data cleanup, but you cannot automate human maturity. Paying down leadership debt begins with six repeatable behaviors: hear deeply, uphold accountability, model calm optimism, amplify others, navigate styles and simplify, and seek feedback as fuel. These are not soft skills. They are the hard human system upgrades that determine whether AI investments create value.

The leaders who win the AI era will not be those who master neural networks. They will be the ones who master themselves. Leadership is not a byproduct of transformation. It is the precondition for it. The AI economy will be led by humans at the helm.

Posted in : Agentic AI, AGI, Artificial Intelligence, Automation, Employee Experience, Leadership


Start measuring your AI Velocity Gap before your market measures it for you…


On Sunday, employees live the AI dream: frictionless, instant, empowering. They connect Gmail, Calendar, and OpenTable without asking permission. They fix mistakes, automate workflows, and see instant ROI on everyday tasks.

On Monday, they crash into enterprise reality: data silos, email chains, compliance debates, and governance frameworks that exist only in PowerPoint.

Your best employees are already AI augmented.
Your enterprise is still forming committees.

This is your AI Velocity Gap, and it’s probably widening every day.

Within the next 18 months, your employees will be working side-by-side with agentic AI, while your enterprise still debates policies and pilots.  The AI Velocity Gap is the widening divide between how fast individuals are adopting AI to get work done and the speed with which enterprises are enabling it. It is the distance between human ambition and enterprise readiness for AI.

The AI Velocity Gap is no longer a concept. It’s the performance metric that decides whether you’ll lead the AI-first economy or be left behind.

You can’t close gaps that you can’t measure

The biggest failure in enterprise AI today is the inability to quantify progress. According to a new WalkMe survey, 78% of employees admit to using AI tools not approved by their employer, while only 7.5% have received extensive AI training. This isn’t rebellion. It’s your workforce solving problems faster than your IT department can write policies.

To reinforce this point, a recent HFS study of 545 Global 2000 firms shows that two-thirds of them are merely paying lip service to agentic AI, running low-level activities such as task automation (RPA) and copilot assistants, a practice we term “agentic washing”:

This isn’t just semantic confusion. It’s strategic misdirection. When enterprises rebrand basic RPA as ‘agentic AI,’ they’re measuring the wrong outcomes and celebrating the wrong wins. Real agentic AI makes decisions, adapts to context, and operates with minimal human intervention. Task automation and copilots are table stakes, not transformation.

To understand the size of your AI Velocity Gap, start tracking three speeds that define your transformation

Speed 1: Adoption velocity. How fast are employees deploying AI tools compared with official enterprise initiatives? Count the number of unsanctioned tools in use, prompts executed per week, and AI workflows created outside IT control. Recent data shows 80% of SaaS logins for AI tools bypass IT oversight entirely.

Speed 2: Enablement velocity. How quickly your infrastructure supports AI-driven work. Measure API availability, time to access enterprise data, and how much of your process knowledge is actually documented. MIT’s Project NANDA found that only 40% of companies provide official AI subscriptions, yet 90% of employees use personal AI tools daily. When your sanctioned enterprise systems can’t compete with consumer ChatGPT, your infrastructure is the bottleneck.

Speed 3: Cultural velocity. How ready your leaders are to let AI make real decisions. Track executive sign-off cycles, experiment-to-production ratios, and how often AI output is used in live operations. Despite $30-40 billion invested in generative AI initiatives, only 5% of organizations see transformative returns (MIT’s Project NANDA). The other 95%? Still measuring activity instead of outcomes. Recent HFS research also shows how many enterprises are failing to establish a positive culture around enterprise GenAI adoption. These measures expose where your enterprise is crawling while your people are sprinting. Nowhere is this more visible than in organizational culture:

Click to Enlarge

Leadership Must Confront the Culture Crisis.  Half of enterprise leaders are failing to drive a positive AI culture, and the data reveals why transformation stalls. HFS Research found that 45% of employees are either worried about job loss or resistant to change, while only 15% are genuinely positive about AI adoption. This isn’t a technology problem. It’s a leadership vacuum. When a quarter of your workforce fears for their jobs and another fifth actively resists GenAI due to disruption concerns, pilots will never scale to production.

Leaders who win build trust through transparency: they communicate how AI augments roles rather than replaces them, celebrate employees who co-create with AI tools, and reward outcomes over activity. The 15% who embrace AI as a driver of innovation aren’t lucky. They work for leaders who made a choice to lead with conviction instead of caution. The culture gap is closeable, but only if executives stop debating policies and start demonstrating that AI makes work better, not obsolete.

Turn AI experiments into AI operations 

Once you’ve measured the size of your AI Velocity Gap, the next step is to close it with intent and precision.

Transform pilots into platforms. Stop running disconnected proofs of concept. One financial services firm discovered 27 unauthorized AI tools being used for zip code analysis in sales workflows. Instead of shutting them down, they built compliant data paths that preserved the productivity gains. Rebuild one business process entirely around AI and scale it. You will learn more from one operational success than from ten experiments.

Recast governance as enablement. Treat compliance like infrastructure, not red tape. In organizations that provide solid AI training and clear, open policies for using AI tools while ensuring data security, our research shows shadow AI adoption drops by 50% or more. Automate policy checks and build real-time audit trails so AI decisions can move as fast as your market.
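As a purely illustrative sketch of what “compliance as infrastructure” can look like in practice, the snippet below shows an automated policy check that approves or denies an AI tool request and appends every decision to an audit trail. All names here (the approved-tool list, the `check_ai_request` function, the policy rules) are hypothetical, not any real product’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical allow-list of sanctioned AI tools; a real deployment
# would pull this from a governed configuration store.
APPROVED_TOOLS = {"copilot-enterprise", "internal-llm-gateway"}

@dataclass
class AuditEvent:
    """One immutable record in the real-time audit trail."""
    user: str
    tool: str
    allowed: bool
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_trail: list[AuditEvent] = []

def check_ai_request(user: str, tool: str, contains_pii: bool) -> bool:
    """Automated policy check: decide, record the decision, return it."""
    if tool not in APPROVED_TOOLS:
        allowed, reason = False, "unsanctioned tool"
    elif contains_pii:
        allowed, reason = False, "PII must stay inside governed systems"
    else:
        allowed, reason = True, "policy checks passed"
    audit_trail.append(AuditEvent(user, tool, allowed, reason))
    return allowed

# Sanctioned tool on non-sensitive data passes; shadow AI is denied,
# but both decisions land in the audit trail either way.
check_ai_request("analyst01", "internal-llm-gateway", contains_pii=False)
check_ai_request("analyst01", "chatgpt-personal", contains_pii=False)
```

The point of the sketch is the shape, not the rules: when the check runs in code rather than in a committee, denials and approvals move at the same speed, and the audit trail is a by-product of normal operation instead of a quarterly reconstruction.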

Build an AI-first workforce. Every employee should learn to express intent through natural language tools.  A study from AI4SP reveals that developers using AI coding assistants cut task times by 33%. Legal teams slash document analysis from 2 hours to 15 minutes. Content teams reduce their reliance on human translators by 90%. Train employees to think like workflow designers, not passive users. This is the new enterprise literacy.

Avoid the pitfalls that will stall your momentum

Governance bloat that drowns innovation in paperwork. You build approved vendor lists that expire before employees finish compliance training. By the time Legal signs off, your people have already found better tools.

Data delusion that mistakes “clean enough” for good enough. When it takes longer to configure your AI tool than to do the work manually, you don’t have an AI problem. You have a data architecture problem that no amount of vendor demos will fix.

Talent inertia that protects comfort zones over curiosity. 40% of enterprises lack adequate AI expertise internally (Stack-AI, 2025). Worse, 31% of employees, including 41% of Gen Z, admit they’re actively sabotaging your AI strategy by refusing to use the tools you bought (Writer/Workplace Intelligence, 2025). They’re not confused. They’re voting with their feet.

Pilot fatigue that celebrates activity instead of adoption. 75% of enterprises remain stuck in pilot mode, unable to reach scale or full adoption (HFS Research, 2025). The gap between the 25% who succeed and the 75% who don’t isn’t technical capability. It’s leadership conviction.

Every one of these is a symptom of leadership fear, not technical limitation.

Create momentum that employees can feel

Your teams already see AI’s power in their daily tools. Harness that energy instead of suppressing it.

Launch internal Agent Labs where cross-functional teams can safely experiment with enterprise data. A technology company preparing for an IPO discovered an analyst using personal ChatGPT Plus to analyze confidential revenue projections under deadline pressure. The risk wasn’t the tool. It was the lack of a safe, sanctioned alternative.

Celebrate visible wins, not vague ambitions. Show hours saved, errors reduced, customers served faster. Make success tangible and repeatable. The enterprises that publicize their AI champions and their results create permission structures for broader adoption.

AI success is not just about tools, it is about teaching your people how to think and build with them. Gallup finds that employees who receive formal AI training are 89% more likely to view AI as highly productive and beneficial to their work. Boston Consulting Group reports that companies providing at least five hours of AI training and in-person coaching see far greater adoption and workflow redesign success. Training converts skeptics into champions faster than any pilot program.

Ignoring the AI Velocity Gap does not protect you, it compounds your vulnerability

Enterprises that ignore their AI Velocity Gap are not standing still; they are moving backward. The gap compounds silently every quarter, widening the distance between individual innovation and organizational inertia.

Talent walks. Skilled employees already using AI to accelerate work will leave for employers that recognize and reward their capabilities. A recent PwC study found that 52% of Gen Z workers would quit a job that limits their use of AI tools. AI fluency has become a career currency, not a novelty.

Shadow AI becomes shadow operations. When enterprises fail to provide secure, sanctioned AI environments, employees build their own. Sensitive data flows into public models, audit trails disappear, and compliance teams lose visibility. The result is not just risk, it is a fragmented operating model that no one controls.

Customers feel the lag. Competitors that integrate AI into customer support, sales, and service workflows will deliver faster, cheaper, and more personalized experiences. HFS predicts that 75% of customer interactions will involve some AI agent in the next 12 months. Firms that cannot match that pace will lose relevance, not just revenue.

Leaders lose credibility. Boards and investors no longer see AI as optional. In Q2 2025 earnings calls, more than 60% of S&P 500 CEOs mentioned AI as a top strategic lever. When executives keep “piloting” instead of delivering, they signal indecision, not prudence.

You build enterprise debt instead of enterprise value. Every delayed AI initiative adds layers of process debt, data debt, and talent debt that become exponentially harder to repay. The enterprise becomes structurally slower even as the market speeds up.

Bottom Line: The AI Velocity Gap is a leadership test, not a technical one

Technology isn’t holding you back. Leadership courage is. The firms that win the AI-first decade will not be the ones that perfect policy or vendor selection. They will be the ones that measure progress, reward experimentation, and move faster than their fear.

Your people are already crossing the AI chasm. The only question left is whether your enterprise has the conviction to follow.

Posted in : Agentic AI, AGI, Artificial Intelligence, Change Management, Employee Experience, GenAI, Generative Enterprise
