Enterprises are running faster than their leaders can evolve. Boards demand AI-powered growth, while employees crave purpose and job stability. Customers expect personalization and ethics in the same breath, while investors want returns yesterday. Caught in the middle, leaders are oversold on technology and underdeveloped on humanity. That gap is what we term leadership debt, and it’s the most expensive liability no CFO can measure.
HFS estimates that today’s Global 2000 enterprises carry close to $10 trillion in combined debt across process, data, people, and technology. Yet none of these debts compound faster or cut deeper than leadership debt. It sits inside the people debt, amplifying its impact across every transformation layer. Leadership debt is the gap between what leaders expect from AI-driven change and how they actually lead through it. It is the interest paid on avoidance, inconsistency, and unearned optimism.
Leadership debt explains why so many enterprises are “AI-ready” on paper but emotionally unprepared in practice. The systems are there, but the trust is not. The dashboards light up, but the teams shut down. This debt shows up as friction in decision-making, fear in the culture, and a widening gap between what organizations say they value and how they behave under pressure.
Fear isn’t the problem. Leadership avoidance is
Executives keep saying their people are afraid of AI. They are not wrong, but they are not right either. Fear in the workforce is not resistance; it is feedback. It signals that leaders have moved faster than their people’s sense of purpose, security, or control.
HFS research shows that 52 percent of employees are either skeptical or resistant to AI agent integration in their workflows, with the top concern being a fear of replacement or devaluation. This is not an irrational fear; it is a rational response to unclear leadership.
Most leaders talk about the promise of AI, not its consequences. They announce automation but rarely explain adaptation. They celebrate efficiency but skip over impact. Fear spreads not because employees misunderstand AI, but because leaders fail to explain what it means for them, then blame them for reacting to uncertainty.
Recognizing fear is not enough. Leadership accountability means closing the gap between intent and impact. It means listening to what the workforce is afraid of and responding with clarity, not platitudes. Until leaders take ownership of that, AI adoption will remain an exercise in anxiety management, not transformation.
The critical six leadership behaviors to succeed in the AI age
The pattern across every successful AI transformation is consistent. Effective leaders in today’s ambitious AI-first organizations practice six behaviors relentlessly. These behaviors are much more than soft skills; they are the leadership operating system that determines whether your AI investments deliver returns or stall in resistance.
Listen deeply
A global pharmaceutical company’s AI forecasts were off by double digits for months despite multiple rounds of model tuning. The problem was not technical; it was human. Field teams had noticed errors but stopped reporting them because their VP dominated every meeting and dismissed new ideas. When a new COO replaced the routine updates with one question, “What are we missing?”, the issue surfaced within days. Packaging suppliers had changed barcode formats, and the model had never been retrained to recognize them. The fix took 48 hours. Most executives would have launched another task force. She simply listened.
HFS’s extensive research with its OneCouncil members finds that firms with strong listening cultures make decisions significantly faster and with higher accuracy. The World Economic Forum ranks active listening among the top five skills for future leaders. Listening is not empathy theater; it is operational intelligence.
Leaders should begin meetings with “Tell me what I don’t know,” hold regular skip-level sessions, and limit their own talk time in problem-solving discussions. When front-line input drives process improvements each quarter, leaders are truly hearing what matters.
Uphold accountability
When an AI triage system misrouted urgent healthcare cases, one executive issued a direct internal message: “I approved this rollout too fast. Here’s what we’re fixing and how we’ll prevent it next time.” Trust rose immediately. Leaders who hide behind vendors or processes see repeat incidents climb. Those who take ownership see faster recoveries and stronger team confidence.
Accountability is not a communication tactic; it is the leadership signal employees read most closely. A simple three-sentence “Own It” framework works best: what happened, what I own, and what we will do next. When issues are acknowledged within 24 hours, teams respond faster and alignment returns quickly.
Model calm optimism
During an AI pilot kickoff, a CIO began with a moment of humor. “I asked ChatGPT to write this speech. It gave me 1,200 words of nonsense. Let’s learn together how to make it useful.” The laughter that followed broke the tension and unlocked genuine curiosity across the team. Leaders who admit uncertainty create psychological safety. Those who fake confidence lose talent. LinkedIn data reinforces that nearly nine in ten employees value trust in leadership over compensation.
Calm optimism is not naïve cheerfulness. It is clarity in uncertainty. The best leaders use a simple rhythm: here is what we know, here is what we do not know yet, and here is what we are trying next. When teams feel honesty, they stay engaged through change rather than fearing it.
Amplify others
A global bank’s CTO cut loan approval times by 60 percent and chose not to take the stage alone. At the next town hall, he invited the data engineer, compliance officer, and operations lead who built the solution to share how they did it. Collaboration between departments rose immediately.
Leadership amplification changes behavior faster than any governance rule. When people see peers recognized for cross-functional success, they start sharing data and expertise without being told.
Navigate styles for simplified communication
One VP of Operations created a one-page communication guide for her leadership team, listing preferred timing, channel, and decision style for each executive. She redesigned meetings to close with one decision, one owner, and one deadline. Average meeting length dropped dramatically. Most organizations do the opposite. They invite more people, skip agendas, and leave without clarity. The cost shows up in lost time, rework, and frustration.
Leaders who match their communication to how people work spend less time in meetings and more time moving forward. Every meeting that ends without a clear outcome adds interest to your leadership debt.
Seek feedback
A CFO ends each quarter with a reverse review, asking her team to rate her on clarity, speed, and decision quality. The first session was uncomfortable. By the third, it was transformational. By treating feedback as data, she created a continuous learning loop.
The World Economic Forum lists continuous learning and feedback literacy among the top three skills for 2025. LinkedIn’s research shows that 91 percent of employers now rank human skills as equal or greater in importance to technical expertise, yet fewer than 20 percent measure them. Asking for feedback is not weakness; it is model retraining for humans.
Bottom line: leadership is no longer about “soft skills” but about a systemic human upgrade to how we lead and drive teams
You can buy technology, restructure processes, and outsource data cleanup, but you cannot automate human maturity. Paying down leadership debt begins with six repeatable behaviors: listen deeply, uphold accountability, model calm optimism, amplify others, navigate styles and simplify, and seek feedback as fuel. These are not soft skills. They are the hard human system upgrades that determine whether AI investments create value.
The leaders who win the AI era will not be those who master neural networks. They will be the ones who master themselves. Leadership is not a byproduct of transformation. It is the precondition for it. The AI economy will still need humans at the helm.
On Sunday, employees live the AI dream: frictionless, instant, empowering. They connect Gmail, Calendar, and OpenTable without asking permission. They fix mistakes, automate workflows, and see instant ROI on everyday tasks.
On Monday, they crash into enterprise reality: data silos, email chains, compliance debates, and governance frameworks that exist only in PowerPoint.
Your best employees are already AI augmented. Your enterprise is still forming committees.
This is your AI Velocity Gap, and it’s probably widening every day
Within the next 18 months, your employees will be working side-by-side with agentic AI, while your enterprise still debates policies and pilots. The AI Velocity Gap is the widening divide between how fast individuals are adopting AI to get work done and the speed with which enterprises are enabling it. It is the distance between human ambition and enterprise readiness for AI.
The AI Velocity Gap is no longer a concept. It’s the performance metric that decides whether you’ll lead the AI-first economy or be left behind.
You can’t close gaps that you can’t measure
The biggest failure in enterprise AI today is the inability to quantify progress. According to a new WalkMe survey, 78% of employees admit to using AI tools not approved by their employer, while only 7.5% have received extensive AI training. This isn’t rebellion. It’s your workforce solving problems faster than your IT department can write policies.
To reinforce this point, a recent HFS study of 545 Global 2000 firms clearly shows that two-thirds of them are merely paying lip service to agentic AI, running low-level activities such as task automation (RPA) and copilot assistants, a practice we term “agentic washing”:
This isn’t just semantic confusion. It’s strategic misdirection. When enterprises rebrand basic RPA as ‘agentic AI,’ they’re measuring the wrong outcomes and celebrating the wrong wins. Real agentic AI makes decisions, adapts to context, and operates with minimal human intervention. Task automation and copilots are table stakes, not transformation.
To understand the size of your AI Velocity Gap, start tracking three speeds that define your transformation
Speed 1: Adoption velocity. How fast are employees deploying AI tools compared with official enterprise initiatives? Count the number of unsanctioned tools in use, prompts executed per week, and AI workflows created outside IT control. Recent data shows 80% of SaaS logins for AI tools bypass IT oversight entirely.
Speed 2: Enablement velocity. How quickly your infrastructure supports AI-driven work. Measure API availability, time to access enterprise data, and how much of your process knowledge is actually documented. MIT’s Project NANDA found that only 40% of companies provide official AI subscriptions, yet 90% of employees use personal AI tools daily. When your sanctioned enterprise systems can’t compete with consumer ChatGPT, your infrastructure is the bottleneck.
Speed 3: Cultural velocity. How ready your leaders are to let AI make real decisions. Track executive sign-off cycles, experiment-to-production ratios, and how often AI output is used in live operations. Despite $30-40 billion invested in generative AI initiatives, only 5% of organizations see transformative returns (MIT’s Project NANDA). The other 95%? Still measuring activity instead of outcomes. In addition, recent HFS research shows how many enterprises are failing to establish a positive culture around enterprise GenAI adoption. A minimal sketch of how to instrument these three speeds follows below.
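None of these speeds is hard to instrument. As a hypothetical sketch (the metric names, weights, and formula below are illustrative assumptions, not an HFS standard), a simple scorecard can contrast how fast employees are moving against how fast the enterprise is enabling and trusting them:

```python
from dataclasses import dataclass

@dataclass
class VelocityMetrics:
    # Adoption velocity: how fast employees are moving (per quarter)
    unsanctioned_tools: int          # AI tools in use outside IT control
    prompts_per_employee_week: float
    # Enablement velocity: how fast the enterprise is enabling them
    sanctioned_tools: int            # officially provisioned AI tools
    avg_days_to_data_access: float   # time to get enterprise data into a tool
    # Cultural velocity: how ready leadership is
    experiments: int
    in_production: int               # experiments promoted to live operations

def velocity_gap(m: VelocityMetrics) -> float:
    """Crude ratio: values well above 1 mean employees are outrunning the enterprise."""
    adoption = m.unsanctioned_tools + m.prompts_per_employee_week
    enablement = m.sanctioned_tools + max(1.0, 30 / max(m.avg_days_to_data_access, 1))
    cultural = max(m.in_production / max(m.experiments, 1), 0.01)
    return adoption / (enablement * cultural)

# Hypothetical quarter: 27 shadow tools, 12 prompts/employee/week,
# 3 sanctioned tools, 45 days to data access, 20 pilots, 1 in production
print(velocity_gap(VelocityMetrics(27, 12.0, 3, 45.0, 20, 1)))
```

These measures expose where your enterprise is crawling while your people are sprinting. Nowhere is this more visible than in organizational culture: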
Leadership Must Confront the Culture Crisis. Half of enterprise leaders are failing to drive a positive AI culture, and the data reveals why transformation stalls. HFS Research found that 45% of employees are either worried about job loss or resistant to change, while only 15% are genuinely positive about AI adoption. This isn’t a technology problem. It’s a leadership vacuum. When a quarter of your workforce fears for their jobs and another fifth actively resists GenAI due to disruption concerns, pilots will never scale to production.
Leaders who win build trust through transparency: they communicate how AI augments roles rather than replaces them, celebrate employees who co-create with AI tools, and reward outcomes over activity. The 15% who embrace AI as a driver of innovation aren’t lucky. They work for leaders who made a choice to lead with conviction instead of caution. The culture gap is closeable, but only if executives stop debating policies and start demonstrating that AI makes work better, not obsolete.
Turn AI experiments into AI operations
Once you’ve measured the size of your AI Velocity Gap, the next step is to close it with intent and precision.
Transform pilots into platforms. Stop running disconnected proofs of concept. One financial services firm discovered 27 unauthorized AI tools being used for zip code analysis in sales workflows. Instead of shutting them down, they built compliant data paths that preserved the productivity gains. Rebuild one business process entirely around AI and scale it. You will learn more from one operational success than from ten experiments.
Recast governance as enablement. Treat compliance like infrastructure, not red tape. In organizations that provide solid AI training and clear, open policies for using AI tools while ensuring data security, our research shows shadow AI adoption drops by 50% or more. Automate policy checks and build real-time audit trails so AI decisions can move as fast as your market.
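What does compliance-as-infrastructure look like in practice? Here is a minimal sketch, assuming a hypothetical allowlist and log format (neither is a prescribed standard): an automated policy check that decides per data class and always writes a real-time audit trail, so governance enables rather than blocks:

```python
import json
from datetime import datetime, timezone

# Hypothetical allowlist: sanctioned tools mapped to approved data classes
SANCTIONED = {
    "enterprise-copilot": {"public", "internal"},
    "azure-openai-gpt4o": {"public", "internal", "confidential"},
}

def check_and_log(tool: str, data_class: str, user: str) -> bool:
    """Allow or deny an AI tool call, and always leave an audit trail."""
    allowed = data_class in SANCTIONED.get(tool, set())
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_class": data_class,
        "decision": "allow" if allowed else "deny",
    }
    with open("ai_audit_log.jsonl", "a") as f:  # real-time audit trail
        f.write(json.dumps(record) + "\n")
    return allowed

# A denial is a signal to enable, not to punish: if "deny" events cluster
# around one unsanctioned tool, that is where to build a compliant data path.
check_and_log("personal-chatgpt", "confidential", "analyst_042")
```

The payoff is the log itself: clusters of denials show exactly where to build the compliant data paths that preserve productivity gains, as the financial services example above did.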
Build an AI-first workforce. Every employee should learn to express intent through natural language tools. A study from AI4SP reveals that developers using AI coding assistants cut task times by 33%. Legal teams slash document analysis from 2 hours to 15 minutes. Content teams reduce their reliance on human translators by 90%. Train employees to think like workflow designers, not passive users. This is the new enterprise literacy.
Avoid the pitfalls that will stall your momentum
Governance bloat that drowns innovation in paperwork. You build approved vendor lists that expire before employees finish compliance training. By the time Legal signs off, your people have already found better tools.
Data delusion that mistakes “clean enough” for good enough. When it takes longer to configure your AI tool than to do the work manually, you don’t have an AI problem. You have a data architecture problem that no amount of vendor demos will fix.
Talent inertia that protects comfort zones over curiosity. 40% of enterprises lack adequate AI expertise internally (Stack-AI, 2025). Worse, 31% of employees, including 41% of Gen Z, admit they’re actively sabotaging your AI strategy by refusing to use the tools you bought (Writer/Workplace Intelligence, 2025). They’re not confused. They’re voting with their feet.
Pilot fatigue that celebrates activity instead of adoption. 75% of enterprises remain stuck in pilot mode, unable to reach scale or full adoption (HFS Research, 2025). The gap between the 25% who succeed and the 75% who don’t isn’t technical capability. It’s leadership conviction.
Every one of these is a symptom of leadership fear, not technical limitation.
Create momentum that employees can feel
Your teams already see AI’s power in their daily tools. Harness that energy instead of suppressing it.
Launch internal Agent Labs where cross-functional teams can safely experiment with enterprise data. A technology company preparing for an IPO discovered an analyst using personal ChatGPT Plus to analyze confidential revenue projections under deadline pressure. The risk wasn’t the tool. It was the lack of a safe, sanctioned alternative.
Celebrate visible wins, not vague ambitions. Show hours saved, errors reduced, customers served faster. Make success tangible and repeatable. The enterprises that publicize their AI champions and their results create permission structures for broader adoption.
AI success is not just about tools, it is about teaching your people how to think and build with them. Gallup finds that employees who receive formal AI training are 89% more likely to view AI as highly productive and beneficial to their work. Boston Consulting Group reports that companies providing at least five hours of AI training and in-person coaching see far greater adoption and workflow redesign success. Training converts skeptics into champions faster than any pilot program.
Ignoring the AI Velocity Gap does not protect you; it compounds your vulnerability
Enterprises that ignore their AI Velocity Gap are not standing still; they are moving backward. The gap compounds silently every quarter, widening the distance between individual innovation and organizational inertia.
Talent walks. Skilled employees already using AI to accelerate work will leave for employers that recognize and reward their capabilities. A recent PwC study found that 52% of Gen Z workers would quit a job that limits their use of AI tools. AI fluency has become a career currency, not a novelty.
Shadow AI becomes shadow operations. When enterprises fail to provide secure, sanctioned AI environments, employees build their own. Sensitive data flows into public models, audit trails disappear, and compliance teams lose visibility. The result is not just risk, it is a fragmented operating model that no one controls.
Customers feel the lag. Competitors that integrate AI into customer support, sales, and service workflows will deliver faster, cheaper, and more personalized experiences. HFS predicts that 75% of customer interactions will involve some AI agent in the next 12 months. Firms that cannot match that pace will lose relevance, not just revenue.
Leaders lose credibility. Boards and investors no longer see AI as optional. In Q2 2025 earnings calls, more than 60% of S&P 500 CEOs mentioned AI as a top strategic lever. When executives keep “piloting” instead of delivering, they signal indecision, not prudence.
You build enterprise debt instead of enterprise value. Every delayed AI initiative adds layers of process debt, data debt, and talent debt that become exponentially harder to repay. The enterprise becomes structurally slower even as the market speeds up.
Bottom Line: The AI Velocity Gap is a leadership test, not a technical one
Technology isn’t holding you back. Leadership courage is. The firms that win the AI-first decade will not be the ones that perfect policy or vendor selection. They will be the ones that measure progress, reward experimentation, and move faster than their fear.
Your people are already crossing the AI chasm. The only question left is whether your enterprise has the conviction to follow.
We’re thrilled to welcome Achyuta Ghosh to HFS as our new Executive Research Leader for GCCs and BPO, based in Delhi.
Achyuta joins us from Nasscom, where he made his name building its research practice from the ground up and shaping industry perspectives on AI, cloud, SaaS, GCCs, and the Future of Work. We have enjoyed working with Achyuta over the years as a research partner, so this is a natural progression in his analyst career. Achyuta brings 23+ years of experience, having worked at WNS, Genpact, Ford Motor Company, and Frost & Sullivan earlier in his career.
I’m personally excited for Achyuta to bring his expertise, energy, and personality to the HFS community. So let’s hear from him directly about his new career with HFS and what to expect from him donning purple…
What excites you most about joining HFS at this point in your career?
Honestly, what excites me about joining HFS right now is the chance to dive straight into what is transforming technology and business. I’ve experienced this industry from multiple lenses as a practitioner, a catalyst, and an analyst, and right now, the speed and scale of change are truly staggering. Companies will not make small adjustments; they will need to tear down old models, turn services into software, and move beyond the typical “digital transformation” mantra.
It’s a full-blown business reinvention, and HFS is right at the center of all this. What really attracts me is their mix of independent thinking, deep expertise, and, frankly, a willingness to work hand in hand with their clients.
The HFS team isn’t just writing reports from the sidelines. They collaborate directly with enterprises through workshops, events, and roundtables to turn insights into real results. That’s exactly the kind of impact I want to make. In a landscape packed with vendor-driven rankings and buzzwords, HFS keeps it real. They’ve built a reputation for being direct and honest, which matches how I want to approach my work.
HFS covers many areas for its size, but at its heart, it’s a startup that moves fast into futuristic themes, such as leading the services-as-software narrative. That blend of expertise and openness to what’s next is the environment where I know I can contribute and keep growing.
You’ve spent years tracking GCCs and BPO. What shifts are you seeing that leaders should pay attention to right now?
The game has changed for GCCs and BPOs. Leading GCCs are stepping up as centers for digital, analytics, and AI, playing a direct role in enterprise strategy and even feeding into local innovation ecosystems. The top performers now have more autonomy and are much more tightly integrated into the business.
It’s the same story with BPOs. They’ve shifted from executing tasks to actually owning outcomes, ramping up automation, and building specialized platforms for micro verticals. The focus is on reskilling talent, making compliance a priority, and doubling down on trust and sustainability.
For leaders, this means it’s time to rethink how they invest in and manage GCCs and orchestrate talent, tech, and partner strategy. The old lines between in-house and outsourced are fading, so it’s crucial to understand what stays internal versus what goes outside while finding new ways to collaborate. And this is not a one-size-fits-all strategy. Each organization will need to do it in a way that works best for them. Automation and AI are rewriting the rulebook on service delivery and business models, which means new governance, new metrics, and a sharper focus on business value.
Let’s not forget: keeping teams motivated, reskilling and upskilling them, and building a strong culture is more important than ever, especially in these hybrid, distributed setups. Embedding sustainability and ecosystem partnerships into long-term strategy is quickly moving from “nice to have” to “must-have.”
Where do you see the most significant opportunities for GCCs and BPO firms in the AI era?
The reality is that everyone talks about moving up the value chain, but less than 10% of GCCs have actually made the leap to become true transformation hubs. That’s the opportunity: to close the gap between possibilities and reality.
AI is the game-changer. It lets GCCs and BPOs redesign service delivery. Embedding generative AI and automation opens the door to scale up high-value domains like risk, compliance, customer insights, and product design so that you are helping shape and grow business.
The big opportunity is to step up as strategic partners, driving AI initiatives, scaling innovation, and connecting the dots across platforms, providers, and talent. That’s how GCCs and BPOs can help clients navigate disruption, unlock new growth, and lead in this AI-driven era.
How do you plan to shape HFS’s research agenda in this space?
First, a disclaimer: The research focus will flex and shift as we learn more about customer needs, but our main goal will always be to help enterprises get the most out of their GCC and GBS strategies.
While GCCs have scaled up quickly in the past few years, I believe there is tremendous headroom for further sector growth. With the landscape changing fast (talent pools shifting, costs fluctuating, ecosystems evolving), the old playbooks no longer cut it. Global location strategy, talent demand, and modernized service delivery models are crucial to transforming GCCs into genuine value creators. And for enterprises, that presents both a challenge and an opportunity. The goalposts have shifted: saving money and improving efficiency are now hygiene, while driving innovation, building IP, and taking on productization are the focus.
AI is at the top of my mind, too. Everyone knows AI will fundamentally change how business runs, but there’s a big gap between experimenting and scaling AI-first operations. A small share of GCCs are actually AI-centric. The challenge now is figuring out how to implement models like services-as-software, generative AI, or even emerging approaches like vibe coding in a way that truly changes delivery. And, just as importantly, leaders need to assess readiness, build robust governance, and ensure AI is used responsibly across the organization.
Finally, there’s the matter of value and partnerships. Enterprises need clear ways to measure what they expect from their GCCs and compare it to the delivered value. With the line between what GCCs and service providers deliver getting fuzzier, companies have to find smarter ways to structure partnerships so both sides complement each other for optimum impact.
What unique perspective do you think you bring to analyzing this industry?
As I mentioned earlier, what sets me apart is that I’ve seen this industry from all sides: analyst, practitioner, and catalyst.
As an analyst, my years of tracking shifts in tech at an industry level mean I know how to connect what’s happening on the ground to the big-picture strategy. As a practitioner, I’ve actually built analytics products and tech-centric businesses, so I look hard at what’s practical versus what’s just hype. As a catalyst, my time with a tech trade body taught me how ecosystems work: talent, innovation, stakeholder management, and partnerships. That mindset is what can help GCCs truly scale their impact.
Who has been the biggest influence on your career so far, and why?
Honestly, it’s been a mix of great mentors at every stage. Early on in my analyst career, I learned the importance of execution and discipline. Later, I was pushed to think bigger and look at the long-term impact. I’ve picked up lessons from experienced clients at WNS and Genpact, sharp analysts at Frost & Sullivan, global leaders at Ford Motor Company, and some of the top minds at Nasscom (both internal and external member organizations).
Frost & Sullivan leaders drilled real research discipline into me. My clients showed me the value of looking at problems from every angle. At Ford, my role focused on fostering an entrepreneurial mindset. And Indian tech services industry leaders at Nasscom helped me develop the ability to anticipate trends and explain what’s coming next. Put all that together, and I like to move fast, ask questions, back up insights with evidence, and never lose sight of stakeholder needs.
Outside of research, what keeps you busy or inspired? Any hobbies, passions, or quirks the HFS community should know about?
I’m an avid reader, especially when it comes to war history. Strategy, leadership under pressure, the cost of decisions, and resilience are lessons that apply just as much in business. I’m also a diehard car and motorcycle enthusiast, and I have travelled around the country. There’s nothing like a long road trip to remind you that detours often lead to the best discoveries.
Outside of that, I’m always up for trying new food. I’ve run a movie review page in the past, and cricket’s been a passion of mine for years.
Fun fact: I tend to connect with dogs faster than with most people.
And finally, if you had to describe yourself in three words, what would they be?
Three words? Curious, resilient, analytical.
Curious because, honestly, I never stop asking questions. Whether it’s diving into new markets or figuring out what makes a team tick, I’m always looking for the “why” behind things.
Resilient because my career has demanded it, from navigating industry disruptions to managing personal setbacks. Resilience has made me more adaptable, helping me persist through uncertainty and stay optimistic about the future.
Analytical, absolutely. I love connecting the dots, spotting patterns, and building a story out of the numbers.
The revelation that Deloitte submitted a government report filled with AI-generated fake references and fabricated court quotes is not just embarrassing – it is a $290,000 lesson in what happens when professional judgment is replaced by blind trust in AI.
Australia’s Department of Employment and Workplace Relations said Deloitte will return part of the AU$440,000 fee after errors including a fabricated court quote and non-existent references were uncovered by academic Chris Rudge. A corrected version disclosed use of Azure OpenAI GPT-4o after the scandal broke. AI without verification is not innovation. It is professional malpractice waiting to happen.
GPT-4o did not malfunction. Deloitte’s process did.
Deloitte was hired to review a welfare compliance framework and IT system. The report went live, and a single diligent reader exposed fake sources and a bogus court quote. Deloitte then refunded the final payment and disclosed its generative AI use after the fact.
The model did not fail. It produced fluent, plausible text exactly as designed. What failed was process and accountability. Someone generated content, skipped verification, and submitted it to a government client whose decisions affect millions of citizens and billions in welfare payments. The stakes were too high for shortcuts, yet shortcuts were taken.
Enterprise leaders already know the risks, but they’re buying the services anyway
The irony is almost painful. When HFS Research surveyed 505 enterprise leaders across Global 2000 firms in 2024, 32% identified “risk of inaccurate or unreliable outputs, including potential for AI hallucinations” as one of their top concerns when engaging professional services that use AI in delivery. That’s nearly one in three buyers explicitly worried about the exact problem that just cost Deloitte a contract and its reputation:
Yet those same enterprises keep signing deals with firms racing to automate their deliverables without building verification into workflows. The Deloitte scandal isn’t revealing a hidden risk, it’s confirming what enterprise leaders already feared. Even more telling, 44% cited lack of transparency in AI-driven decisions as their top concern, and 28% worried about limited accountability for AI-related errors. The market knows the problem exists. The difference now is that Deloitte’s $290,000 refund puts a price tag on ignoring it. When nearly half of your potential clients are already worried about whether you’re being transparent about AI use, hiding GPT-4o in your methodology until after you get caught isn’t just bad practice, it’s commercial suicide.
Buyers ranked “ability to balance AI with human expertise” as their fifth most important selection criterion
When HFS Research asked 1,002 enterprise leaders across Global 2000 firms what matters most when selecting an AI-powered consulting firm in 2025, the results expose exactly where Deloitte failed. “Ability to balance AI with human expertise” ranked fifth out of ten criteria, sitting between proprietary IP differentiation and track record of delivering outcomes:
This isn’t a nice-to-have buried at the bottom of the list. It’s a top-five dealbreaker. Yet Deloitte’s approach to the Australian welfare report suggests they treated AI as a replacement for human judgment rather than an amplifier of it. The ranking also reveals something critical about buyer expectations: deep industry expertise still matters most, but the ability to use AI responsibly is now more important than customization, change management capabilities, or vendor ecosystem collaboration. Enterprise leaders aren’t rejecting AI in professional services. They’re demanding that firms prove they can deploy it without sacrificing the human insight they’re paying premium rates to receive. Deloitte’s scandal shows what happens when a firm optimizes for speed and margin while ignoring the one thing clients ranked in their top five priorities.
Your vendors are using AI right now whether you know it or not
If you buy consulting, strategy reports, audits, or any expertise-driven service, assume AI is already in your supply chain. The question is not if vendors are using it, but how responsibly they are doing so.
Make AI disclosure non-negotiable in every contract starting today. Every agreement must spell out which tools are used, for what purposes, and how verification occurs. “We use GPT-4o for initial drafts, followed by human fact-checking” is accountability. “AI-assisted workflows” is a loophole.
Verify before you act on any deliverable that matters. High-stakes work requires qualified human review of every claim and citation. Do not assume plausibility equals truth. Deloitte was caught by one diligent academic. How many unchecked reports are sitting in your systems right now?
Rewrite acceptance criteria because your current standards assume human work. Add explicit checks for fact accuracy, citation integrity, and full AI disclosure to every statement of work before you sign it.
Create escalation protocols before the next crisis breaks. When a fabricated quote surfaces, who investigates? Who notifies the client? How do you remediate within hours, not weeks? Deloitte’s response was reactive PR. You need prevention built into operations.
The race to automate is hurting service provider credibility
One unverified deliverable cost Deloitte both money and trust. The economic temptation is obvious: use AI to draft faster, bill the same, and pocket the margin. But that margin gain is being bought with a credibility deficit that compounds with every careless report.
This is the dark side of the Services-as-Software era. AI can enable services to behave like scalable platforms, but that only works when the underlying workflows are validated, explainable, and consistently coded for quality. Without these controls, Services-as-Software collapses into Services-as-Spin.
Make verification mandatory for every single AI-assisted output. Every piece of content must undergo human expert review before it leaves your building. Treat verification as a professional obligation, not an optional cost.
Default to transparency because clients will find out eventually. Discovery happens through audits, detection tools, or leaks. Early disclosure builds trust. Concealment destroys it permanently.
Separate creation from review immediately. The person prompting AI cannot be the same one validating its results. Fresh eyes catch errors invisible to the drafter who anchored on what they expected.
Price for integrity, not just AI-enabled margin expansion. If AI improves efficiency, reinvest some savings in stronger validation. Competing on AI-driven speed while starving quality control is reputational suicide.
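Two of these controls, mandatory verification and the separation of creation from review, are simple enough to enforce in tooling. A minimal sketch, assuming a hypothetical citation format and review rule (nothing here is a real firm’s workflow): extract every citation from an AI-assisted draft into a queue that only a second person can sign off:

```python
import re

# Illustrative pattern for parenthetical citations, e.g. "(Smith, 2023)"
CITATION = re.compile(r"\(([A-Z][A-Za-z&.\s]+?,\s*\d{4})\)")

def review_queue(draft: str, author: str, reviewer: str) -> list[str]:
    """Queue every citation in an AI-assisted draft for second-person review."""
    if reviewer == author:
        raise ValueError("Creation and review must be separated.")
    return CITATION.findall(draft)

draft = "Case routing error rates doubled (Smith, 2023) after rollout (Jones, 2024)."
for cite in review_queue(draft, author="analyst_a", reviewer="partner_b"):
    print(f"VERIFY BEFORE RELEASE: {cite}")
```

The point is not the regex; it is that no draft leaves the building until a person other than the prompter has checked every claim that can be checked.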
Most AI transformation programs fail because they optimize for speed over verification
This problem is now systemic across sectors. New York lawyers were sanctioned in Mata v. Avianca after filing briefs with non-existent cases generated by ChatGPT. UK High Court judges have warned lawyers that citing fake AI cases can trigger contempt referrals. Air Canada was held liable after its website chatbot gave a passenger false policy guidance. Media outlets including CNET and Sports Illustrated faced backlash and corrections for AI-generated content riddled with factual errors and fake bylines. Academic publishers have retracted thousands of papers amid papermill and AI-fabrication concerns, with Wiley confirming over 11,000 retractions tied to Hindawi and Springer Nature retracting a machine-learning book after fake citations were exposed.
LLMs can be transformative when they amplify human expertise. Deloitte’s failure was not using AI. It was abdicating accountability by treating AI as a substitute for analysis rather than a partner in it.
This is exactly where Vibe Coding matters. Enterprises that succeed with AI are already teaching their people to code the “vibe” of quality into every workflow. That means aligning how data is validated, how context is shared, and how collaboration flows across the OneOffice. You do not scale trust with technology. You scale it through consistent cultural coding of how technology is used.
The fastest results come from fixing one broken process at a time
Do not try to govern AI everywhere at once. Start where the blast radius is biggest.
As our recent research across the Global 2000 reveals, the issue with AI transformation isn’t the tech; it’s the archaic processes that fail to produce the data needed for better decisions. It’s also the failure of leadership to train their people to rethink processes and stay focused on the real business problems they are trying to solve. While so many stakeholders obsess over technical debt, the real change mandate is to address process, data, and people debt to exploit these wonderful technologies:
For enterprises: Pick one high-stakes category such as government reports, regulatory filings, audit outputs, or financial models. Build airtight disclosure and verification there first, then scale the approach.
For service providers: Target practices using AI heavily. Make documented verification a non-negotiable part of client delivery. Track error types and use them to improve prompts, retrieval methods, and quality checklists continuously.
The leaders separating progress from scandal are those who embed quality control at the start, not those scrambling to bolt it on after public failure.
Regulatory crackdowns are coming and 60% of firms have no AI governance plan
Courts are already adjusting. US judges have begun issuing standing orders that require lawyers to certify whether filings used generative AI and to verify any AI-drafted text. The UK High Court has warned that submitting fictitious AI-generated case law risks contempt or referral to regulators. At the policy level, NIST has published its Generative AI Profile as companion guidance to the AI Risk Management Framework with concrete control actions organizations can adopt now.
Governments burned by AI blunders will introduce binding standards. Professional associations will issue mandatory guidelines. Clients will add AI clauses to every contract with real penalties. The firms that move now, investing in transparency, training, and verification, will win trust and market share. The rest will be litigating their way through the next cycle of embarrassment.
We are in a period where AI capability has outpaced corporate discipline. Old QA checklists miss AI-specific failure modes like hallucinations and fake citations. Old pricing models ignore the real cost of verification overhead. Old disclosure norms hide behind marketing language that protects no one.
Bottom line: AI without verification is outsourcing judgment to a system that confidently invents facts.
Deloitte’s scandal is not an outlier. It is the first major warning shot of a much larger credibility crisis coming for every industry. The shift to Services-as-Software and Vibe-Coded enterprises is about replacing legacy human-only workflows with intelligent, accountable, and transparent ones that combine machine efficiency with human integrity. Build this discipline into your operating model now or explain the next scandal later when your name hits the headlines.
Within three years, two-thirds of Global 2000 enterprises intend to replace human-heavy IT and BPO services with AI-driven delivery. At HFS, we are terming this Services-as-Software (SaS) and view the rapid progress of AI Agents, Large Language Models (LLMs) and, ultimately, Vibe Coding as the three technological catalysts to make this happen:
Why Services-as-Software will render many traditional services and software providers obsolete
Services-as-Software (SaS) is the fusion of software and services into AI-powered, outcome-driven platforms that continuously learn and adapt. SaS replaces static SaaS and labor-heavy consulting with autonomous digital service layers that deliver expertise and execution in real time. SaS will eventually render traditional labor-based professional services and traditional SaaS providers obsolete, replaced by scalable AI Agents that deliver outcomes, not hours or licenses.
SaS is an emerging enterprise model where human-delivered services are redesigned as intelligent, automated, and continuously adaptive software entities. Instead of buying static SaaS licenses or paying for labor-intensive services, enterprises consume AI-native service layers that blend automation, reasoning, and execution into outcome-based solutions.
SaS delivery will be accelerated by LLMs, orchestrated by Agentic AI, and produced by Vibe Coding
Services scaled in the past by combining talent with common tech platforms. In the SaS era, the same principle applies, but talent now needs deeper business context to unlock the value of common AI platforms.
Vibe Coding provides speed and intent, but it is the fusion with LLMs and Agentic AI that makes SaS sustainable:
LLMs are the accelerator. They automate content, code, and workflow generation. Customers are already using them to shrink delivery cycles, generate reusable IP, and cut the cost of software testing.
Agentic AI is the orchestrator. Multi-agent systems manage tasks, test outputs, retrain models, and monitor compliance. We see banks piloting agent-based compliance checks and insurers using them to orchestrate claims processing without armies of analysts.
Vibe Coding is the production engine. It anchors intent-driven builds that can be refined, secured, and deployed like products. Large retailers are now expecting working demos within days of an engagement, driven by Vibe Coding copilots.
Together, these three AI constituents are replacing traditional FTE billing with subscription-based services, predictable costs, and outcome-linked value:
SaS blurs the line between software and services to form a $1.5 trillion industry
Like services, SaS delivers expertise and decision-making. Like software, it is automated, scalable, and subscription-based. But unlike either, it is dynamic, self-learning, and outcome-driven. This new category will absorb spend from both traditional SaaS and IT services, creating a $1.5 trillion market over the next few years, where enterprises stop paying for headcount or static tools and instead subscribe to AI-powered, adaptive outcomes:
Vibe Coding is a new programming style that emphasizes rapid, intuitive, and low-ceremony development. Developers and business stakeholders co-create with AI copilots through natural language, moving directly from intent to working code. It’s fast, conversational, and adaptive, designed for a world where software and services are fusing into dynamic, AI-driven outcomes. At HFS, we believe Vibe Coding will become the production engine behind the emerging Services-as-Software model, and we won’t even be calling it “vibe coding” in the future. It will all be about writing syntax to frame problems and design solutions. Let’s investigate further…
Vibe Coding offers the opportunity to realize the HFS OneOffice vision
For decades, business and IT have operated in silos — business leaders drafting requirements, IT translating them months later into code. This gap has slowed innovation and reinforced the divide between the front, middle, and back office.
By enabling business stakeholders and developers to co-create with AI copilots in natural language, Vibe Coding collapses the wall between business intent and technology execution. Instead of handoffs, enterprises move in real time from idea to outcome. This is where business and IT finally come together as OneOffice: a unified, adaptive enterprise where technology and talent co-orchestrate value creation.
Those enterprises that simply think they can bolt on agentic technologies to their existing processes are quickly learning that this adds minimal value. It is like bolting a Tesla battery pack onto a lawnmower, where you can brag about the tech, but it will not cut the grass any faster. To gain the maximum benefits from AI technologies, business executives must work closely with their IT counterparts to design processes that generate the right data, make smarter decisions, and train people to use the technology effectively. The way we work is changing, both in terms of how processes function and how our roles need to broaden, as so many of our current tasks are improved or even replaced by AI.
Vibe Coding is not a lab experiment
Vibe Coding emphasizes speed, intuition, and iteration over rigid, process-heavy development. By reducing dependence on upfront design and exhaustive documentation, Vibe Coding enables teams to move directly from intent to working code in a conversational, adaptive way that aligns with the fast, fluid needs of modern enterprises.
Start-up funding organization Y Combinator reported that a quarter of its Winter 2025 startups had codebases that were 95 percent AI-generated. Production-ready components are now being built in hours instead of weeks. Governance, compliance, and security can be embedded into the codebase from the outset. Services, like software releases, can be built once and reused across multiple customers instead of bespoke projects. The consequences for service providers are profound.
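What does moving directly from intent to working code look like? A minimal illustration, with a hypothetical copilot session standing in for any real tool: the stakeholder states the business intent in natural language and reviews generated, testable code rather than a requirements document:

```python
# Stakeholder intent, expressed conversationally:
#   "Flag any supplier invoice over $10,000 that lacks a purchase order,
#    and route it to compliance before payment."

# A copilot turns that intent into working, reviewable code in minutes:
def route_invoice(invoice: dict) -> str:
    """Return the next step for an invoice, per the stated business intent."""
    if invoice["amount"] > 10_000 and not invoice.get("po_number"):
        return "compliance_review"   # high-value, no PO: hold for compliance
    return "payment"

# The stakeholder validates behavior directly, with no requirements handoff:
assert route_invoice({"amount": 25_000}) == "compliance_review"
assert route_invoice({"amount": 25_000, "po_number": "PO-881"}) == "payment"
assert route_invoice({"amount": 900}) == "payment"
```

The review step is where governance lives: the same conversational loop can generate the tests, documentation, and audit notes alongside the logic.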
Enterprises mustn’t approach SaS as another outsourcing wave.
SaS is a new operating model and ambitious enterprise customers now expect working demos early in engagements, are exploring outcome-based contracts, and demand transparency on AI governance and intellectual property. This is the SaS vision coming to life in real time.
Smart enterprise leaders must stop measuring value in FTE counts and start anchoring contracts to speed, reuse, and reliability. That means asking providers for subscription-style pricing and demonstrable reuse of code and IP, not endless custom builds. It also means insisting on AI governance frameworks that explain how models are trained, how code is validated, and how intellectual property is protected.
CIOs must pivot their own talent strategies too. Developers and architects need to work with Vibe Coding and agentic systems rather than compete with them. Enterprises that invest in prompt engineering, AI-era architecture oversight, and code validation will extract the most value. Those that do not risk being locked into black-box services they cannot control or trust.
A global bank recently shifted from a traditional outsourcing contract to a SaS model for customer onboarding. Instead of hundreds of developers coding workflows, the provider now delivers an AI-powered onboarding service on subscription. Vibe Coding enables rapid iteration of new compliance checks, GenAI auto-generates the documentation, and Agentic AI monitors process accuracy. The bank gets faster releases, lower costs, and auditable governance, without the FTE treadmill.
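To see why the economics shift, consider a toy comparison (every figure below is hypothetical) of traditional FTE billing against a subscription with outcome-linked service credits:

```python
def fte_cost(developers: int, rate_per_day: float, days: int) -> float:
    """Traditional time-and-materials: cost scales with headcount."""
    return developers * rate_per_day * days

def sas_cost(monthly_fee: float, months: int, sla_breaches: int,
             credit_per_breach: float) -> float:
    """Subscription: flat fee, with credits back when outcomes are missed."""
    return monthly_fee * months - sla_breaches * credit_per_breach

# Hypothetical onboarding engagement over one year
print(f"FTE model: ${fte_cost(40, 800, 220):,.0f}")          # 40 devs at $800/day
print(f"SaS model: ${sas_cost(350_000, 12, 2, 25_000):,.0f}")
```

The buyer’s exposure moves from headcount burn to outcome risk, which is exactly what the transparency and governance demands above are meant to police.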
Is Vibe Coding ready for prime time, or stuck in the lab?
To test the waters, we recently ran a LinkedIn poll asking: “Will enterprises adopt Vibe Coding?”
The results highlight both momentum and hesitation. The majority clearly see Vibe Coding as inevitable, drawn by its speed, adaptability, and ability to turn ideas into working code in days instead of months. Younger developers in particular are energized by its low-ceremony, conversational style, which plays to their comfort with AI-first tools.
But the poll sounds a note of caution as well. Skeptics view Vibe Coding as risky because of:
Governance & compliance gaps. Regulators and enterprises worry about how to audit code that is 95% AI-generated.
Black-box outputs. Without explainability, enterprises fear vendor lock-in and an inability to validate outcomes.
Talent disruption. Senior developers may resist low-ceremony, AI-first practices that threaten traditional roles.
Security concerns. Copilots and LLMs trained on broad datasets raise questions about vulnerabilities and IP leakage.
Cultural inertia. Shifting from documentation-heavy processes to conversational coding requires a new mindset that not all enterprises are ready for.
These concerns don’t negate the momentum, but they underline that adoption will depend on embedding governance, explainability, and trust at the core of Vibe Coding practices.
Recommendations for Enterprise Customers
Shift contracting models. Move from FTE billing and static SaaS licenses to subscription-style, outcome-linked services.
Invest in talent. Build new skills in prompt engineering, AI-era architecture, and code validation. Encourage younger talent to lead experiments with Vibe Coding, since they adapt fastest to conversational, AI-first ways of building.
Demand transparency. Require providers to demonstrate AI governance frameworks, data lineage, and intellectual property protection.
Push for reuse. Ask providers to deliver modular service components that can be reused across the enterprise, not bespoke one-offs.
Pilot fast, scale faster. Expect working demos in days, not months, and measure providers on speed, reuse, and reliability.
Recommendations for Service Providers
Retire the labor pyramid. Replace headcount-heavy delivery with AI-first, productized service agents built through Vibe Coding.
Embed governance at the core. Bake compliance, security, and auditability into AI services from the outset.
Industrialize Vibe Coding. Make copilots standard for all delivery teams and use young developers as the frontline to accelerate builds and generate reusable IP.
Rewire pricing. Shift to subscription-based models that monetize outcomes, not hours.
Partner widely. Team with hyperscalers, LLM vendors, and AI-native startups to co-create SaS offerings.
Recommendations for Traditional SaaS Firms
Move beyond licenses. Static SaaS will be cannibalized. Pivot toward adaptive, AI-powered service layers that evolve continuously.
Fuse with services. Collaborate with service providers to create co-delivered SaS platforms.
Embrace Vibe Coding ecosystems. Open your platforms so developers, especially younger talent, can use AI copilots to extend and customize products in real time.
Differentiate on trust. Put governance, privacy, and explainability at the center of your value proposition.
Accelerate open platforms. Build marketplaces where Vibe Coding, Agentic AI, and LLM-powered agents extend your applications with speed and creativity.
Bottom line: Embrace vibe coding. Don’t fear it.
Vibe Coding is the production engine that makes Services-as-Software real, and it will decide the winners of the next decade. Enterprises can no longer afford to treat AI as bolt-ons or outsourcing-lite. SaS is a new operating model where AI agents, LLMs, and Vibe Coding collapse the gap between software and services, shifting the economics of IT from people and licenses to reusable, outcome-based digital service layers.
The real risk isn’t that Vibe Coding will fail; it’s that enterprises will fear it and do nothing, clinging to incremental improvements that are only slightly better, faster, or cheaper. Those who adapt now will own the future $1.5 trillion SaS market. Those who don’t will be stuck optimizing the old world, while others reinvent the new one.
To conclude, Vibe Coding is a new mindset that energizes young talent and accelerates the shift from human-run services to software-run outcomes. Enterprises, providers, and SaaS firms that embrace this culture will define the $1.5 trillion SaS market.
The future of the analyst industry is here, but most of its stakeholders are simply not ready. You only need to see the stock price carnage of Gartner and Forrester to realize the analyst and advisor industry is in grave danger of being run over by AI.
The Futurum Group CEO, Daniel Newman, and I take a hard look at the future of the analyst industry in the era of ChatGPT-5 and agentic AI.
We discuss how AI is dismantling legacy models built on slow inquiry processes, paywalled reports, and expensive AR programs, replacing them with instant, on-demand insights. Analyst firms, especially the large, entrenched players, must reinvent themselves fast, shifting from rear-view research to forward-looking influence, proprietary data, and authentic personal brands.
➡️ Our conversation covers:
*Why AI is making traditional analyst deliverables (Magic Quadrants, long reports) less relevant
*The need for speed, authenticity, and personality to stand out in a market drowning in AI-generated content
*How smaller, nimble firms can outpace large incumbents by moving faster and building direct influence
*The decline of AR’s traditional role as a concierge between vendors and analysts
*The genuine risk of irrelevance if the industry fails to adapt within 18 months
Bottom line: The analyst business has arrived at its “Blockbuster moment.”
Adapt quickly, embrace AI, and build genuine influence or be replaced by faster, cheaper, and better alternatives. Enjoy!
Layoffs are accelerating again, but this time AI is becoming the cover story, offering a convenient narrative to justify long-delayed changes under the guise of technological progress.
Microsoft, Amazon, Citigroup, UPS, Google, McKinsey, Deloitte, PwC, and many others have all recently laid off staff under the AI smokescreen. The headlines say it is about automation and AI readiness, but that is not the whole story… not even close.
What we are seeing is not just automation-led efficiency; it is a structural shakeout triggered by board pressure to cut costs, eliminate underperforming middle layers, and move away from legacy talent strategies. The corporate world is also suffering from high-wage fatigue: many staff have seen significant wage growth, especially since the inflationary pandemic years, and maintaining them on these high salaries and benefits has become very expensive.
Many of these firms have been waiting years for an excuse to trim the fat, and now they have it. We will, however, give some credit to TCS CEO K Krithivasan, who positioned the company’s recent restructuring initiative as “aimed at transforming the company into a future-ready organization.” At least there is some admission here that many staff at mid-senior levels were no longer delivering value in a challenging market environment, and it was time to trim the fat. Not one mention of AI…
You can run from your past, but it will catch up with you if you can’t change your habits
Companies are making moves they have postponed for years. Cuts labeled as future-proofing are often strategic resets that should have happened long before AI showed up. Yes, some AI deployments are proving valuable. But not at the scale required to displace tens of thousands of roles overnight.
In fact, we were having exactly the same conversations a decade ago when RPA was hyping the market, but those technologies couldn’t scale and deliver the way GenAI and agentic AI are now. It is easier to blame emerging tech than to admit to dysfunctional processes, poor-quality data, bloated hierarchies, poor skills development, or misaligned workforce structures.
Recent research (below) clearly shows that the issues plaguing major enterprises are not technology constraints but the non-technical areas blocking their ability to exploit AI.
Leaders need to stop pretending all of this is an AI transformation. This is not just an overdue cleanup… it is a premature dismantling of work structures without the foundations for what comes next. What is clear is that you will fail with AI if you do not focus on your processes, data, and people first.
Cutting early-career talent creates long-term fragility and trashes your culture
Some of the clearest signals are coming from the firms that once defined the pyramid talent model: the Big Four. Deloitte, EY, PwC, and KPMG have sharply reduced graduate recruitment, with cuts as deep as 44 percent compared to last year. At the same time, mid-level roles are being protected, senior compensation is climbing, and administrative tasks are being offshored to lower-cost hubs.
The rationale? AI can automate a lot of entry-level work. But AI is not ready to own this work at scale, and many of these roles were never just about task execution; they were about long-term capability building. Cutting them removes a foundational layer of growth, learning, and leadership development; it weakens succession pipelines and institutional knowledge transfer, and it creates brittle organizations with no buffer to absorb future shifts.
This pattern is not limited to the big four. The US government is accelerating generative AI pilots while significantly cutting civil service positions. Media organizations like Business Insider are adopting AI-first strategies while laying off large portions of their newsrooms. B2B companies are reducing headcount in marketing and sales functions in anticipation of productivity gains that have yet to fully materialize.
Yet few of these decisions are supported by robust evidence of AI delivering sustainable value at scale. Instead, many are driven by cost-cutting mandates, simplification goals, and boardroom pressure. AI is being positioned as a convenient explanation for broader organizational shakeouts.
Crucially, early-career roles are not collateral damage; they are a deliberate target. Firms are pulling back on graduate and entry-level hiring on the assumption that AI will render those jobs unnecessary. But AI can only automate fragments of work, not own entire workflows, and eliminating these junior positions puts long-term organizational stability at risk. We are not just trimming headcount; we are erasing the scaffolding of future expertise.
Smart enterprise leaders are leaning into both young talent and emerging AI opportunities together
Enterprise leaders should proactively invest in young talent by aligning graduate recruitment with evolving skill requirements, emphasizing continuous learning, and developing pathways that enable graduates to work creatively and effectively with AI tools. Lean into both by creating roles that complement AI—focusing human effort on critical thinking, creativity, ethics oversight, process and systems governance, and innovation management.
The US, for example, has several robust training initiatives underway to support AI workforce development. These include NSF-funded National AI Research Institutes focused on sector-specific skills, the Department of Labor’s AI Apprenticeship Program emphasizing practical AI training, the Department of Commerce’s AI Centers of Excellence facilitating industry partnerships, and Workforce Innovation Grants aimed at boosting AI education in community colleges and regional institutions.
The work has not been redesigned, only reduced
Here is the real risk. Enterprises are shrinking their workforce without reshaping the work. The assumption is that AI will simply fill in the gaps. However, only 12 percent of organizations report a somewhat mature level of AI readiness, and most of them acknowledge that they have a long way to go (see exhibit below). That makes the scale of workforce cuts hard to justify on the basis of actual deployment.
Most of the AI used today still relies on human orchestration, supervision, and refinement. Few enterprises have established clear handoffs between agents and people, and even fewer have rearchitected workflows to reflect new levels of autonomy or human-AI collaboration.
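For illustration, here is a minimal sketch, our own toy example rather than any enterprise’s actual design, of what an explicit agent-to-human handoff can look like; the confidence threshold, names, and review queue are all assumptions:

```python
# Illustrative sketch of an explicit agent-to-human handoff. The threshold,
# names, and review queue are assumptions, not a reference architecture.
from dataclasses import dataclass, field

@dataclass
class AgentOutput:
    task_id: str
    result: str
    confidence: float  # agent's self-reported confidence, 0.0 to 1.0

@dataclass
class Workflow:
    review_threshold: float = 0.85
    human_queue: list = field(default_factory=list)

    def route(self, output: AgentOutput) -> str:
        """Auto-complete confident outputs; escalate the rest to a person."""
        if output.confidence >= self.review_threshold:
            return f"{output.task_id}: auto-completed"
        self.human_queue.append(output)  # the explicit, auditable handoff
        return f"{output.task_id}: escalated for human review"

wf = Workflow()
print(wf.route(AgentOutput("invoice-042", "approved", 0.97)))  # auto-completed
print(wf.route(AgentOutput("invoice-043", "approved", 0.61)))  # escalated
```

The point is not the code but the discipline: until the escalation path is defined this explicitly, "human-AI collaboration" is a slogan, not a workflow.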
We are clearing out talent faster than we are designing the next delivery model, and that creates exposure, inconsistent outcomes, and over-reliance on immature systems. Fragile operating models are held together by duct tape, not by design.
This is not the AI revolution most leaders say they are preparing for. It is a pause on hiring disguised as foresight. A reversion to old cost takeout habits, rather than a step toward adaptive work models.
Where enterprise leaders now need to focus
If you are an enterprise leader watching this unfold, it is time to move beyond reactive cycles. Here is where to focus instead:
Map the work, not just the roles. Understand what outcomes your teams are responsible for and where AI can assist rather than own them. Decompose the work before deciding who or what should do it.
Stop hollowing out your future talent. Reducing early-career roles may create short-term savings, but it destroys long-term agility. Invest in hybrid learning environments where new talent can collaborate with AI systems and senior mentors.
Redesign for orchestration. Do not just implement AI tools. Build systems around them that define how work is triggered, handed off, evaluated, and evolved. Think beyond productivity into reliability and resilience.
Ground your decisions in data, not buzz. Track what AI is actually delivering: where it saves time, reduces errors, or enhances quality. Make staffing decisions based on this evidence, not aspiration (a minimal sketch of this kind of tracking follows this list).
Challenge your narrative. If you are using AI as the justification for layoffs, be honest about what is really driving the shift. Employees, customers, and shareholders are watching and expecting more than spin.
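To make that evidence base concrete, here is a minimal sketch, with entirely illustrative field names and figures, of the kind of before-and-after tracking that should precede any AI-justified staffing decision:

```python
# Illustrative sketch only -- field names and figures are our assumptions,
# not a standard framework. It shows the evidence base that should precede
# any AI-justified staffing decision.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskRecord:
    task: str
    minutes_human_only: float   # baseline time without AI assistance
    minutes_with_ai: float      # observed time with AI in the loop
    errors_human_only: int
    errors_with_ai: int

def summarize(records: list) -> dict:
    """Aggregate observed AI impact into the two numbers leaders ask for."""
    time_saved_pct = mean(
        (r.minutes_human_only - r.minutes_with_ai) / r.minutes_human_only
        for r in records
    ) * 100
    error_delta = sum(r.errors_with_ai - r.errors_human_only for r in records)
    return {"avg_time_saved_pct": round(time_saved_pct, 1),
            "net_error_change": error_delta}

log = [
    TaskRecord("draft client summary", 60, 35, 1, 1),
    TaskRecord("reconcile invoices", 45, 40, 0, 2),
]
print(summarize(log))  # {'avg_time_saved_pct': 26.4, 'net_error_change': 2}
```

Note what the toy data shows: time saved alongside a net increase in errors. That is exactly the trade-off a headcount decision should surface before anyone signs off on it.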
This is not about being anti-AI. It is about building enterprise systems and workforces that are designed for what is coming, not just shedding what is familiar. AI may be the accelerant, but the redesign is still up to us.
Bottom line: Stop optimizing for a future you have not yet built
Enterprises are rushing to cut headcount for AI efficiency without having done the work to understand what good looks like. Most have not redesigned roles, workflows, or orchestration layers. They are simply hoping that fewer people plus more tech will equal progress. That is not a strategy; it is a gamble.
Love him or loathe him, let’s be clear… President Trump’s new AI Action Plan isn’t just political theatre; it’s a strategic sledgehammer aimed at reshaping the global AI landscape in America’s favor. This is your wake-up call, whether you’re leading a tech firm or steering an enterprise. Align with the American AI stack, or prepare for a long ride in the slow tech lane.
Accelerating innovation and removing regulatory shackles
America is taking its foot off the regulatory brakes as it seeks to become the undisputed AI leader, even though this could amplify ethical risks, weaken content protection, and compromise data privacy. Trump’s vision is simple: remove barriers, build massive infrastructure, and force global alignment with US-controlled AI tech stacks and data centers. Forget subtlety: this is an aggressive, competitive manoeuvre driven by Trump’s showman antics!
Silicon Valley becoming America’s power center
With copyright protections sidelined and regulations slashed, the Valley is primed to dominate this AI-fuelled gold rush. However, this raises critical concerns, namely uncontrolled content exploitation by AI bots that could severely damage intellectual property rights, trigger extensive copyright litigation, and create significant ethical dilemmas related to data privacy and fairness. Additionally, this push reinforces a significant power shift toward Silicon Valley and away from traditional economic hubs like New York, further consolidating influence around tech giants at the expense of traditional finance and media sectors.
America’s free-for-all approach is in stark contrast to Europe’s regulatory quagmire
Europe’s AI Act emphasizes caution, human oversight, and sustainability, which could prove to be deadly slow in the AI arms race. Trump’s strategy couldn’t be more opposite by promoting innovation, accelerating infrastructure, and accepting (or just ignoring) inherent risks. For ambitious enterprises seeking to drive AI-first business and talent strategies, the choice is stark… bet on the fast-moving, albeit riskier, American stack or struggle through Europe’s costly and stodgy compliance maze.
This AI Action Plan could also create challenges for American firms serving European clients, which must still meet EU compliance requirements, potentially increasing operational complexity and legal risk. And with Europe tightening controls and potentially imposing import taxes on tech services from non-compliant countries, American enterprises might face higher costs, restricted market access, or increased scrutiny when serving European customers.
To mitigate these issues, American enterprises should proactively develop flexible, robust governance frameworks capable of adapting to both markets, clearly communicate compliance and ethical strategies, and engage directly with European partners to address potential concerns early.
Audit your tech stack immediately for Chinese influence
With Trump’s renewed emphasis on national security, enterprises must urgently audit their tech stacks. Firmware, data sources, and LLMs originating from China, such as DeepSeek, ERNIE Bot, Manus, Tongyi Qianwen, 360 Zhinao, SenseFace, and Tencent’s Hunyuan, will be considered toxic. Be ready to answer regulators’ tough questions or face uncomfortable public scrutiny. Keep in mind, however, that auditing and removing Chinese technology can present significant operational challenges, including service disruptions, increased costs, and supply-chain complexity.
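As a starting point, here is a minimal sketch of a model-origin sweep; the blocklist, file layout, and function names are illustrative assumptions, and a real audit would also cover firmware, data sources, and transitive dependencies:

```python
# Minimal sketch of a model-origin sweep. The blocklist and file layout are
# illustrative assumptions; a real audit also covers firmware, data sources,
# and transitive dependencies.
import json
from pathlib import Path

# Model families named in the text; extend from your own threat intelligence.
FLAGGED_FAMILIES = {"deepseek", "ernie", "qwen", "tongyi", "zhinao",
                    "senseface", "hunyuan", "manus"}

def flagged_models(registry_dir: str) -> list:
    """Scan JSON model manifests for names matching flagged families."""
    hits = []
    for manifest in Path(registry_dir).glob("**/*.json"):
        meta = json.loads(manifest.read_text())
        name = str(meta.get("model_name", "")).lower()
        for family in FLAGGED_FAMILIES:
            if family in name:
                hits.append((str(manifest), name))
    return hits

if __name__ == "__main__":
    for path, model in flagged_models("./model-registry"):
        print(f"REVIEW: {model} declared in {path}")
```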
Take charge of your DEI and ethics strategy independently
The Trump administration is explicitly stripping DEI and ethics mandates from federal frameworks. This gives enterprises both freedom and responsibility: US firms must now manage their own bias and misinformation risks, especially when operating internationally. Prepare for a dual-speed governance approach that pairs streamlined security for the US market with meticulous ethics and compliance for Europe.
Address the environmental cost of rapid AI expansion
The global environmental impact of AI infrastructure growth is immense. AI data centers alone could account for up to 3.5% of global carbon emissions by 2030 (source: International Energy Agency, 2023), which is even more than the emissions from the global aviation industry today. Enterprises racing to expand their AI capabilities must grapple with sustainability concerns as energy consumption and environmental footprints skyrocket.
American firms relying heavily on carbon-intensive AI infrastructure could struggle to comply with EU sustainability requirements. This scenario could raise operational costs significantly, making American solutions less competitive or less attractive to European partners who prioritize sustainability compliance.
Lean into both your developing human talent and AI ambitions to create a unique company culture and identity
The AI Action Plan presents significant implications for graduate employment and entry-level jobs in particular. On the positive side, increased investment and rapid innovation in AI technology will likely create new categories of high-skilled, high-paying jobs, particularly in AI engineering, data science, and cybersecurity. Graduates who acquire specialized AI-related skills will have considerable advantages in the job market.
However, there’s also a downside. Automation and AI could displace entry-level positions traditionally filled by recent graduates, potentially exacerbating graduate unemployment rates and creating a gap in career pathways. The remedies are the same ones we outlined above: proactively invest in young talent, align graduate recruitment with evolving skill requirements, emphasize continuous learning, and create roles that complement AI, drawing on the US training initiatives already described, from the NSF-funded National AI Research Institutes to the Department of Labor’s AI Apprenticeship Program.
Become an AI quarterback and an indispensable leader
Business leaders must proactively position themselves as indispensable AI quarterbacks within their organizations. This involves developing a deep understanding of AI capabilities, limitations, and strategic implications for your business. Act as a bridge between technical AI teams and broader organizational strategy, effectively translating complex technical details into clear business insights.
Leaders should prioritize AI literacy, invest in executive education, and champion AI-driven initiatives across all departments. Foster a culture of curiosity and agility, encouraging your teams to experiment and iterate quickly. Your ability to lead AI transformations, manage risks, create smart governance frameworks, and leverage technology strategically will make you essential to your organization’s future success.
Bottom line: Invest hard, move fast, and exploit this AI freedom
Trump’s AI Action Plan signals permission to innovate aggressively. Push the limits, break the mold, and stop waiting for global consensus. This is your moment to place your big bets on AI with Uncle Sam’s backing, which means speed and boldness trump caution and inertia. Hesitate now, and you risk irrelevance.
Today’s analysts and advisors love talking about how the speed of AI advancements is turning every industry on its head, but most conveniently ignore the fact that their own industry is getting rewired faster than they can say “disrupted.”
The analyst and advisor industries, reliant on IP and research to market their products, are in serious trouble, and many firms will cease to exist in a couple of years. I mean, whatever happened to the likes of Omdia or 451? They already seem to have melted away into insignificance under some analyst firm roll-up scheme, smashing together mediocre events, marketing, and “research”.
I’ve been fortunate to be part of the analyst and advisory industry for three decades. I can only say it’s been a privilege to be paid to learn, to engage with so many smart people, and to build many, many relationships over the years based on trust, mutual respect, and friendship.
However, there have been warning signs for a long while that the comfortable status quo is getting very rocky (witness Forrester’s dramatic decline). And what’s really worrying is the recent speed of development in AI platforms, agentic software, and LLMs, which is, quite frankly, making analysts and advisors increasingly irrelevant.
The issues are staring us in the face:
Generative AI platforms are fast replacing the need for analyst support. Routine research tasks, such as reports summarizing trends, market sizing, vendor comparisons, or basic scenario analysis, are increasingly being automated by generative AI.
Analysts are just too slow to deliver insight. The sheer speed of GenAI is challenging analysts to justify their premium pricing and timelines. Why pay for information that can take weeks to access, or wait days just to set up a call with an analyst? We are operating in a world of immediate decision-making, and many analysts are simply not adapting.
Cost pressures will force many firms to prioritize their GenAI platforms. GenAI significantly lowers the barriers to basic insight, and many clients are already pushing analyst firms harder to justify their obscene subscription costs. In addition, the cost of enterprise tokens for GenAI platforms is pushing many CFOs to look at offsets against legacy research costs. If you’re spending $500K+ a year on enterprise OpenAI access, you’ll want to offset this against existing information costs, which will likely include analyst subscriptions (a toy version of that offset math follows this list).
Analysts are losing authenticity. So much analyst output today has become so jargonized that many research consumers are simply switching off. Who wants to hear the constant regurgitation of meaningless words like “orchestration” and “transformation”? Analysts using GenAI to craft their narrative immediately lose touch with a human audience that wants to hear something real, not more recycled nonsense.
Many analyst/advisor relations professionals are killing the analyst industry. Most tech and services firms persist in relying on prehistoric analyst relations professionals who have forgotten what “value” analysts provide to their firms. They live in a world of checking boxes for administering their executives’ briefings and justifying their large salaries by claiming they somehow drive influence and new business for their employers. I personally can’t remember the last time an analyst relations professional proactively called up an analyst to understand their research agenda and craft an engagement model to get the most out of the relationship. These roles will likely be phased out in the next couple of years as the whole concept of analyst value moves away from transactional relationships that are becoming worthless in this age of LLMs.
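To see why that line item lands in the CFO’s crosshairs, here is a toy version of the offset math; every figure is an illustrative assumption, not HFS data:

```python
# Toy CFO offset math. Every figure below is an illustrative assumption,
# not HFS data -- the point is the calculation CFOs are starting to run.
genai_platform_spend = 500_000          # annual enterprise GenAI access ($)
analyst_subscriptions = {               # hypothetical legacy research line items ($)
    "firm_a_seat_licenses": 250_000,
    "firm_b_advisory_retainer": 150_000,
    "syndicated_report_bundle": 100_000,
}

legacy_total = sum(analyst_subscriptions.values())
coverage = legacy_total / genai_platform_spend

print(f"Legacy research spend: ${legacy_total:,}")             # $500,000
print(f"Share of GenAI bill it would offset: {coverage:.0%}")  # 100%
```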
In short, the whole concept of the value an analyst provides is changing very fast…
How the analyst industry can save itself
Stop cheating with ChatGPT. Now. As MIT scientists have discovered, using ChatGPT for your writing accumulates cognitive debt, so if you are genuinely using it to write your research for you, stop now. First, it will rot your brain. Second, many smart people can tell when they are not reading the work of a human: too many bullet points, reflexive capitalization of titles, overuse of em dashes, articles that start with “In today’s fast-paced environment…” or some variation of it, and an overreliance on short lists with bold titles. I’ve also started seeing ChatGPT-generated charts and diagrams that don’t really make sense. Plus, some of these analyst articles read like corny journalism from some bland magazine.
Dig deeper – don’t just skim the surface. Too much analysis feels like it’s been written after a quick skim of a press release and a glance at LinkedIn – too much seems generated. If you want to say anything of interest, you must get beyond the obvious. Ask the difficult questions. What’s really going on behind the trends? What are the implications people aren’t talking about? The best analysts cut through the fluff and reveal the real story – not just what happened, but why it matters and what to do about it. Bring insight, not just information. That’s how you earn trust and deliver value.
Be authentic. The one thing good analysts bring to the table is a human voice that should rise above the AI-manufactured cacophony of bullshit. They need to talk plain English to their subscribers. People are turned off by AI-generated content and the same old buzzword bingo, so rise above it, folks! Pretend you are explaining agentic AI to your Mom or the immigration officer who asks what you do for a living…
Lose the attitude. I hate to say it, but people don’t like assholes anymore. They want to like the voice they are hearing, to identify with the analyst, to learn from them, to empathize with them. They don’t want to be lectured and preached to constantly. If they identify with the analyst, they may actually pay to engage with them and get support and ideas from them. Why would you pay for a human being you don’t care about when you get your information from ChatGPT?
Just get to the bloody point. No one has time to read paragraphs of preamble these days; readers need to know immediately what you are writing about. The days of the waffling intellectual analyst are over. You have a tiny window of attention in which to make your mark, so tell your audience up front why your insight matters to them.
Invest in personal relationships – and not just with vendors. The most effective analysts today are those who have invested in their networks and relationships across their ecosystem. I can attest that you can gain a lifetime’s knowledge from a person in an hour. Great analysts get to know the people buying technology and services, not just the ones marketing to the buyer. You will be a much better analyst for conveying real buyer experiences than one who merely parrots vendor marketing jargon. Great analysts tend to be great people with great personalities and relationships.
Use AI as an ally, not a competitor. The old saying, that you won’t lose your job to AI but to someone who can use AI better than you, is VERY true for analysts. Use AI as a research assistant and sounding board, but NOT as your brain.
The Bottom Line: Be honest with yourself if you really want to stay relevant
Analysts need to accomplish three things if they want to avoid being replaced by agents and LLMs:
Influence people. You need to convince people that your experience and views matter, so they actually follow you.
Advise people. You need to convince people that your research and wisdom matter, so they actually listen to you.
Connect people. You need to prove to people you have a great network of stakeholders across your value ecosystem, so they actually want to know you and spend time with you.
The traditional lines between CPG brands and retailers are blurring, with brands engaging consumers directly and retailers elevating private-label products into rivals of global brands. Global supply chain instability is pushing both toward diversification for greater resilience. These firms are battling on multiple fronts—margin pressure, shifting consumer preferences, operational complexity, and a relentless technology drumbeat. While the noise around GenAI, automation, and omnichannel disruption is deafening, executives are asking sharper questions: What investments actually matter? Where should we double down now? What’s worth betting on for the future?
The lion’s share of tech budgets remains anchored in traditional strongholds: cloud computing (26%) and analytics (21%) collectively command nearly half of all enterprise tech spending. The real surprise lies in the swelling appetite for new-age AI: GenAI (10%) and agentic AI (7%) now outpace traditional AI (6%), underscoring a dramatic pivot in enterprise AI adoption. RPA and intelligent automation are still very much alive (9%). Meanwhile, emerging tools such as blockchain and digital twins hover at the margins, but their moment may be approaching.
90% of IT and business services outsourcing spend maps to the eight domains of the HFS retail and CPG value chain. Over 56% is concentrated in just four areas: data-driven product innovation, omnichannel CX, resilient operations, and immersive marketing and customer engagement.
Investments that clearly demonstrated business value and are now ready to scale include:
Personalization, driven by AI recommendation engines and GenAI content creation, is delivering a double-digit revenue uplift per user. Retailers using tools such as Salesforce Einstein or Adobe Target are driving higher conversion rates and increased loyalty.
Omni-fulfillment strategies—including BOPIS (buy online, pick up in store), curbside pickup, and ship-from-store—are now foundational, supported by cloud-based inventory management and AI-driven demand forecasting. Enterprises mastering this coordination enjoy 30% higher customer lifetime value.
Micro-fulfillment centers are helping to meet the growing demand for same-day delivery in urban markets, while bonded warehouses are improving global cash flow and customs agility.
Data-fueled product innovations, such as private-label SKUs based on trending ingredients or unmet category demands, are cutting time-to-market and improving launch success rates.
The report evaluates 27 retail and CPG service providers. Of these, 11 are classified as Horizon 3 Leaders, 10 as Horizon 2 Innovators, and 6 as Horizon 1 Disruptors. The evaluation included inputs from 44 enterprise reference clients and 36 reference technology vendors.
Horizon 1 represents Disruptors laying the foundation for digital efficiency by leveraging technology to drive cost reduction, speed, and operational efficiency in specific functions across the value chain.
Horizon 2 represents Innovators delivering end-to-end experience transformation, i.e., Horizon 1 + elevating the entire value chain by creating integrated, customer-centric experiences through data unification and seamless interaction across touchpoints.
Horizon 3 represents Leaders showcasing ecosystem synergy and new value creation, i.e., Horizon 2 + building ecosystems that unlock new business models, foster co-innovation, and create entirely new revenue streams, with an emphasis on sustainability and collaboration.
The Bottom Line: Retail and CPG leaders should prioritize investments in data-driven product innovation and omnichannel CX with cloud as the enabler, analytics as the propeller, and AI as the value generator.
Service providers that rise beyond traditional services and capture value through forward-looking models such as services-as-software are best suited to serve the expansion ambitions of the retail and CPG ecosystem.