Whenever the European Commission proposes legislation that relates to Internet companies or data protection, tensions flare and lobbyists have a field day. Suggestions of protectionism and stifling innovation quickly enter the public sphere.
Amid the customary saber rattling, the General Data Protection Regulation (GDPR) became law on 24 May 2016, but the broader IT industry didn’t take much notice. Yet the implications of the regulation are profound and could dramatically change the way companies deal with cloud services and Artificial Intelligence. As the adoption of Intelligent Automation starts to accelerate, with Cognitive Computing and Artificial Intelligence as critical building blocks, we sat down with lawyers at Squire Patton Boggs to discuss the repercussions for the broader IT industry.
What is the legislation all about?
The key elements as well as implications of the legislation include:
The GDPR is the European Union’s (EU) new data protection law; it replaces the Data Protection Directive 95/46/EC.
It took effect on 24 May 2016 and becomes enforceable on 25 May 2018
The legislation imposes a uniform data protection law on all EU members, though national governments and Supervisory Authorities (SAs) retain substantial powers
Sanctions and penalties of up to €20 million, or 4% of global turnover, whichever is higher, for a variety of infringements, including:
Breaches of core data protection obligations (e.g., transparency, valid justification, accuracy, security)
Failure to comply with data subjects’ rights (e.g., to access, object to processing, be forgotten)
Failure to comply with rules on transfer of data outside the European Economic Area (EEA)
The GDPR regulates not only businesses with operations in the EU but also companies (whether controllers or processors) that have no EU presence if they are involved in monitoring the behavior of individuals in the EU or selling products or services to them
The GDPR regulates data processors directly for the first time. Processors must now maintain adequate documentation regarding all categories of personal data processing activities carried out for a controller. They must also implement appropriate security standards.
Major data breaches, such as the ones we’ve seen with corporate giants like Anthem and eBay or the infidelity website Ashley Madison, are well documented. We will not attempt to dissect the broader implications of this regulation in this blog post. However, two implications jump out.
First, the severity of the penalties of up to 4% of global turnover means that the regulators have the means to come down hard on organizations that are in breach of the legislation.
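To make the scale concrete, the fine cap described above is simply the higher of a fixed floor and a percentage of turnover. A minimal sketch, illustrative only and not legal advice; the example turnover figure is invented:

```python
# Illustrative sketch, not legal advice: the GDPR's top fine tier is
# the higher of EUR 20 million or 4% of global annual turnover.
def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Upper bound of the top-tier GDPR fine for a given turnover."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# A firm with EUR 2 billion turnover faces a cap of EUR 80 million,
# since 4% of turnover exceeds the EUR 20 million floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

For smaller firms the flat EUR 20 million floor dominates, which is why the regulation bites hard regardless of company size.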
Second, and much closer to our research agenda, the last bullet point about regulating data processors goes to the heart of the As-a-Service Economy and applies to cloud providers, BPOs and Intelligent Automation providers in equal measure.
Service providers face direct responsibility for the data they are processing
So why is the regulation of data processors important? Because, for the first time, processors are directly liable for the data they are processing. Without wanting to drift into legalese too much, a data processor is an organization that may be engaged by a client to process personal data on their behalf (e.g., as an agent or service provider). In our industry, that could include cloud storage providers but also the burgeoning Artificial Intelligence segment. There are broad legal implications for processors, many of which are difficult to translate into simple English. But here’s a crucial one: processors must implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk involved, which means that the processor must make itself aware of the types of data involved and the associated risk levels.
In addition, processors will have to maintain records for the processing activities under their responsibility. Critically, these records must be made available to the supervisory authority on request. At the same time, processors have to guarantee confidentiality and security. For any breach the processor may be directly liable if it has not complied with the regulation or has acted outside the instructions of its client. The focus of the regulation is all about ensuring that processors assist their clients in protecting the freedoms and the rights of the individual by requiring processors to take responsibility for securing the data that they handle. They must also contractually obligate any sub-processors to do likewise. And they must assist their clients in meeting the requirements of the regulation, including in responding to data breach incidents.
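What such records might look like in practice is up to each processor; here is a hypothetical sketch of the kind of structure a processor could maintain and disclose on request. All field names and values are invented for illustration, not drawn from the regulation’s text:

```python
# Hypothetical sketch of a processing-activities record a processor
# might keep so it can be produced to a Supervisory Authority on request.
# Field names and sample values are illustrative only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProcessingRecord:
    controller: str        # the client on whose behalf data is processed
    purpose: str           # why the data is processed
    data_categories: list  # e.g. ["contact details", "payment data"]
    sub_processors: list = field(default_factory=list)
    security_measures: list = field(default_factory=list)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def export(self) -> dict:
        """Serialize the record for disclosure to a Supervisory Authority."""
        return asdict(self)

record = ProcessingRecord(
    controller="ExampleClient Ltd",
    purpose="payroll processing",
    data_categories=["names", "bank details"],
    security_measures=["encryption at rest", "role-based access"],
)
```

The point is less the data structure than the discipline: every category of processing carried out for a controller needs a documented trail, including the sub-processors involved and the security measures applied.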
So what does that mean in practical terms? Take the example of UK mobile operator TalkTalk. Its data breach is well documented. Under the new legislation, the fine could be up to 4% of its turnover, which is massive. At the same time, when Wipro employees working on the TalkTalk contract were accused of making scam calls from their call center, Wipro itself would likely have been directly liable under the GDPR.
However, beyond the broader and more generic issues, the most challenging clause for the journey toward the As-a-Service Economy comes from a stipulation on automated processing. It is so important that we are quoting it in all its legal splendor: “The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.”
The legislation explicitly calls out that such profiling includes people’s performance at work, their economic situation, health, personal preferences and interests. This strikes at the heart of Artificial Intelligence and Intelligent Automation at large. While fraud and tax-evasion monitoring are excluded, the thrust of Machine Learning could be seriously thwarted. However, it will probably take the first court cases to determine where statistical analysis of neural networks and Machine Learning ends and where personal data that potentially needs to be presented in court starts. Yet the legal challenges for service providers become obvious, as they have to be able to answer individual claims of breach of privacy.
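One pragmatic reading of the quoted clause is that a fully automated refusal with significant effect needs a human gate. A purely illustrative sketch; the scoring threshold and review queue are hypothetical placeholders, not a statement of what compliance requires:

```python
# Illustrative sketch only: a guard that refuses to finalize a fully
# automated decision with significant effect (e.g. a credit refusal)
# without human review, mirroring the GDPR clause quoted above.
# The model score, threshold and queue are invented for illustration.

REVIEW_QUEUE = []

def decide_credit(application: dict, model_score: float) -> str:
    """Return 'approved', or route refusals to a human reviewer."""
    if model_score >= 0.7:
        return "approved"
    # An automatic refusal would "significantly affect" the data subject,
    # so queue it for human intervention instead of refusing outright.
    REVIEW_QUEUE.append(application)
    return "pending human review"

print(decide_credit({"applicant": "A. Subject"}, 0.42))  # pending human review
```

Whether a gate like this satisfies the regulation is exactly the kind of question the first court cases will settle; the sketch simply shows where the human intervention would have to sit in the decision flow.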
HR processes and intentions of workforce reduction will be challenged
Let’s apply these stipulations to a couple of scenarios:
HR-related processes will pose the biggest challenges. Many recruitment processes will come under scrutiny because the use of Machine Learning is widespread. Proving that many of these processes are not fully automated will be almost impossible. Similarly, using cognitive tools for performance management will come under the spotlight.
The use of cognitive and automation tools to assemble evidence for staff reductions can be challenged in court. Yet beyond HR processes, the impact will also be felt in broad customer onboarding processes, where we already see extensive use of RPA and Machine Learning. Profiling is used not only to enhance the user experience but to automate a broad set of processes.
Organizations in the US—or the UK after Brexit—should be warned not to give this legislation short shrift. Because, as noted above, it applies to any company that markets goods or services to EU residents regardless of whether the company is located or uses equipment in the EU. To quote some sadly departed UK politician: “We are all in this together!”
Bottom line: The legislation needs to be enforced by national data protection authorities and, ultimately, in court.
Service providers need to evaluate the impact of the GDPR on the way they deliver services. Suffice it to say, enforcement through the courts can be cumbersome and costly. As with any data protection legislation, this will be a fluid process, with continued lobbying and legal challenges. Thus, the legislation will not wreak havoc with the As-a-Service Economy, but it will slow the journey down and make it more complex. And should you have specific questions, we will come armed with our lawyers.
The Global majors: IBM, Accenture, and HP still dominate the top of the list
These top 3 service providers are likely to remain in place in 2016, unless Atos and Capgemini can pull some more rabbits out of the hat with additional acquisitions – this time focused on the EMEA market. This is not such a big leap of imagination, given the length of these firms’ recent acquisition trails. However, it is likely that HP (or whatever it becomes) will overtake Accenture when it adds in CSC’s and, let’s not forget, Xchanging’s revenues over the course of the next year. That said, given IBM and HP’s recent weak growth performance, we expect Accenture to be the fastest growing of the top 3 over the next year, particularly given its recent strong financial performance, most notably last quarter’s double-digit growth, and its broader set of digital capabilities beyond bread-and-butter IT apps and infrastructure.
Atos and Capgemini remain close in revenue terms in Europe, and both firms seem to share similar sets of challenges and goals, albeit from slightly different perspectives – both are focusing management effort on growth in the US. Atos is bearing down and trying to solidify its position in the infrastructure management space with a software-defined datacentre-led approach. Capgemini is building on its broad consulting skills by building out specific industry capabilities and leveraging its IGATE assets. Simply put, both are vying to be their customers’ guide to the digital promised land, but taking different routes to get there. Both firms showed strong growth in the first half of 2016 (Capgemini at 15.6% and Atos at 17.9% in constant currency) as they continue to integrate the finances of their recent acquisitions. Additionally, both providers grew organically: 1.9% for Atos and 3.3% for Capgemini.
Capita consistently remains the biggest BPO player in EMEA, although most of its revenues come from the UK and Ireland markets (we estimate >95% of revenue). Recent half-year financials showed 5% growth, down from 7% growth in 2015. Over the last couple of years, Capita has started to focus on expansion into Europe, with the acquisition of Avocis at the start of 2015 – its biggest commitment to this strategy so far. With the emergence of a competitive Genpact as a serious contender for European BPO deals, Capita is being forced to broaden beyond its English-speaking customer base to avoid losing further market share. Brexit has not dampened its enthusiasm for the expansion. Management comments regarding Brexit echoed those made by TCS and Infosys: some potential short-term uncertainty, but likely medium-term gains once we all gain clarity on what’s in store.
There is a stark contrast between the EMEA provider list and the North American list (which we are publishing next week, so watch out for a blog). The EMEA list contains far fewer offshore-centric firms, which still depend on English-speaking markets for the lion’s share of their business. Both Cognizant and TCS are top 5 players in the North American market, but in EMEA only TCS has managed to claim a Top 15 spot. The UK list would feature most of the big five players, as well as a strong showing from TechMahindra. There has been some emerging success outside of the UK for all of the other offshore firms, and all have had some success in the Nordics, most notably HCL with its huge Volvo and Nokia wins. But TCS has been the only offshore firm to gain genuine scale and generate significant traction in continental Europe, thanks to its focus on localisation and a more entrepreneurial approach to expansion.
Bottom Line: Europe is still a battleground for the Traditional Service Providers, but expect their Indian-centric counterparts to become more prominent as global markets consolidate
The EMEA market is still hugely important for all of the global services firms, and there are plenty of opportunities given its non-homogeneous nature. The reality of the matter is simply that EMEA is not one market. Indeed, the European Union is not one market – look at the relative success of the offshore providers outside of the UK and the Nordics. The differing national markets all have a distinct character and require different capabilities from their service provider organizations, such as local regulatory, compliance, data privacy, labour law and accounting expertise. Many of the individual European countries have specific laws governing where data resides and whether processes can be executed outside of said country. This is especially evident when you look at smaller-scale clients, which need specific attention that the large providers simply cannot scale down to support profitably. So as experience in Europe increases, we expect to see the other offshore providers, in addition to TCS, scale up across the continent, especially as the English-speaking markets become increasingly overheated for commodity IT and BPO services. For the top 15 list itself, we expect a few changes further down the list, including the entrance of SopraSteria or Arvato, in addition to the boost to HP from CSC and Xchanging.
The services industry, and technology industry, are full of ideas that keep coming around. And they often fail several times before they finally succeed. Cloud is a great example, as the groundbreaking successor to hosting and before that timesharing. Many pundits saw the value of renting capacity instead of owning it. The market just needed a few iterations before we found a viable technological AND business model for it.
So here we are, in the services industry, talking about outcome-based contracts. Again. Outcome-based pricing is pretty important at HfS Research: we think it’s transformative enough to be part of one of the eight ideals of the As-a-Service Economy (digital plug-and-play services require an outcome-based model). And of course, my first reaction when outcome-based discussions arise is “what’s different this time?”
Here’s what’s NOT different. Outcome-based contract negotiations are a mess, mostly for some really important reasons, listed in the order you’ll likely come across them if you want to try outcome-based contracting:
You have to know what an outcome is. Seems simple, and in some cases it might be. If you want to sign a BPO deal for claims processing, that’s not too hard. There’s a pretty standard definition of a claim, understanding of how to process it, and if it’s actually been processed. But if you’re going beyond basic transactional outcomes to broader issues like improved customer satisfaction or higher integrity in your supply chain, then you’ll need to spend a boatload of time defining an outcome properly.
Worse than point one, you have to decide what outcomes matter. As soon as someone gets the idea to do an outcome based contract, someone else in your company will come along and ask “why this outcome? Why not that one?” These kinds of discussions bring out some nasty internal arguments. Because sure, everyone can agree that raising the stock price is important and good. But once you get into more operational metrics, every business unit and every executive has different opinions and priorities to get there. Balancing everyone’s priorities to make sure your contract focuses on the right outcomes is a mess.
Then you’ll get into heated discussions about cause and effect. When you start to get into negotiations with your supplier, you’ll get into a debate about whether the supplier can claim victory in ALL instances, or only if the supplier can prove that the outcome was a direct result of its work. If an outcome happens, was it because of the service provider or external factors? Let’s say a supplier offers to reduce your supply chain costs by 15% through a consulting engagement and one of the categories in the engagement is fuel. The cost of oil drops and now your supply chain costs have dropped – having nothing to do with the supplier. This one will go around in circles for weeks.
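One way negotiating teams try to defuse this argument is to credit the supplier only for savings in categories it can actually influence, holding external-factor categories such as fuel out of the calculation. A minimal sketch with invented numbers and category names:

```python
# Illustrative sketch: isolate supplier-attributable savings by excluding
# categories driven by external factors (e.g. fuel tracking oil prices),
# so a market-driven cost drop doesn't count toward the supplier's target.
# All figures and category names are hypothetical.

def attributable_savings(baseline_costs: dict, current_costs: dict,
                         external_categories: set) -> float:
    """Savings credited to the supplier, excluding external-factor categories."""
    savings = 0.0
    for category, baseline in baseline_costs.items():
        if category in external_categories:
            continue  # e.g. fuel: driven by oil prices, not the supplier
        savings += baseline - current_costs.get(category, baseline)
    return savings

baseline = {"fuel": 100_000, "warehousing": 80_000, "freight_handling": 60_000}
current  = {"fuel":  70_000, "warehousing": 72_000, "freight_handling": 55_000}

# Total costs fell by 43,000, but only 13,000 is attributable to the supplier.
print(attributable_savings(baseline, current, {"fuel"}))  # 13000.0
```

Of course, agreeing on which categories count as "external" is itself a negotiation – which is exactly why this debate goes around in circles for weeks.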
What does an outcome even cost, exactly? If you’re paying for outcomes with little-to-no knowledge of the supplier’s cost structure then you have no idea what you should be paying for that service. It’s like cloud – take this price or leave it. So maybe the price seems fair compared to what you think you’re spending internally. During the negotiation, your only real negotiation lever will be if the bid is competitive and you can compare across suppliers.
Making services into a “black box” doesn’t wipe out your regulatory and legal obligations. During negotiations, and continuously afterwards, you have an obligation to vet suppliers for compliance with government regulation, making sure the supplier operates legally and ethically on your behalf and follows appropriate security measures. You can’t wipe out this responsibility by saying you only get the outcome. If you only focus on an outcome, you can easily play the “I don’t care how you deliver it” card. But if your supplier achieves that outcome by using slave labor or being noncompliant with regulations, then you’re still liable, since the supplier is part of your supply chain.
Post contract, you’ll start to resent your supplier BECAUSE THEY SUCCEEDED. Let’s say the contract agrees to pay on an outcome like volumes of sales and then every time sales goes up you have to pay your supplier. It won’t take long for you to decide you’ve paid them enough, in fact probably paid them two times over what you would have paid in a traditional contract structure. And you’ll turn on your provider – who’s doing an amazing job! (Maybe you put in a stop-clause that agrees to pay on outcome up to a certain amount of money, but that’s more likely for consulting/project contracts than ongoing outsourcing ones.)
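A stop-clause of the kind mentioned above is easy to express: pay per unit of outcome, but never beyond an agreed cap. A sketch with hypothetical rates and cap:

```python
# Hypothetical sketch of a stop-clause: an outcome-based fee that is
# limited by a contractual cap. Rates and cap are invented for illustration.

def outcome_payment(units_delivered: int, rate_per_unit: float,
                    cap: float) -> float:
    """Outcome-based fee, limited by a contractual stop-clause cap."""
    return min(units_delivered * rate_per_unit, cap)

# 1,200 incremental sales at EUR 500 each would cost 600,000,
# but the stop-clause caps total payments at 450,000.
print(outcome_payment(1200, 500.0, 450_000.0))  # 450000.0
```

The hard part isn’t the arithmetic, it’s agreeing on the cap – which is why, as noted above, such clauses tend to appear in consulting and project contracts rather than ongoing outsourcing ones.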
Good luck during renegotiation. Remember the point about not knowing cost levers? Chances are your bargaining position will be even worse if you just want to renegotiate because without the competitive bids, you have no basis for comparison. Did the supplier use bots and completely automate the process to get the outcome? Are they primarily labor based? Some combination? If the supplier’s costs are going down, how can you know if you’re getting any of that savings back?
If that’s what’s the same, here’s what’s different: the As-a-Service Economy depends on outcomes. Outcome-based contracts used to be something leaders did, and even then only in a few relatively rare situations. But now it’s becoming a requirement. Who has time in this fast-moving world, where everyone wants to just plug into partners and suppliers and go? Part of being plug-and-play means having an outcome pre-defined and ready to deliver.
No one has time for long complex negotiations. And even though today outcome-based contracts are long and laborious negotiation efforts, if we all keep working on them, we’ll get better at them. We’ll find ways to fix the problems I just listed. Just like timesharing, hosting and cloud, the idea is the right one. If we want to change our businesses and build a competitive future, then we need to start our contracts with the end in mind. We need to focus on what has to get done and not micromanage how it gets done. We’re getting closer as an industry all the time. HfS is working hard on research into this space right now. So when it happens, we’ll get there together.
Have you ever had a meal that tickled your intellectual curiosity, delighted your sensory perceptions and, of course, sated your appetite? One that challenged your concepts of what a restaurant should be and what it should deliver? That’s what Chef Grant Achatz and his team at Alinea in Chicago are trying to create – over and over again. Much as I would like to have had the actual experience, it was while watching an episode of Chef’s Table that I saw eerily familiar themes – concepts we talk and write about in the services outsourcing industry every day.
Here are three takeaways from Alinea that I put together that are meaningful “food for thought” for service providers, buyers and influencers alike:
The maker is as important as the consumer: Whether it’s for IT, business process services, call center operations, or analytics services, we are increasingly telling service providers that they have to think about the customer experience, and alter their service delivery metrics and workflows around delivering for the “customer first” organization. Sometimes it’s easy to forget that behind all the delivery are new generations of global workforces that want to do meaningful work that “makes a difference”, instead of getting stuck in robotic, rote processes. Alinea sees the fulfilment of its team’s creative pursuits and experiences as being as important as those it delivers to its customers. Chef Achatz explains: “Doing the same thing over and over again bores me. [My colleague] will say, ‘Well, none of the guests that are coming in tonight have ever been to this restaurant before. So for them it’s all new.’ And I go, ‘Yeah, but…what about us?’” We need better ways to link employee experiences to the work they deliver, and the ideas they continually generate.
Innovation and risk go hand in hand: With the As-a-Service Economy, and the evolution to the Intelligent OneOffice (or Dumboffice, as the case may be), we’re talking today about the eventual demise of the labor model, new opportunities for intelligent digital data, support and processes – and a transition period for an entire industry in the long run. Which ones will be able to make the shift, attract and motivate the right talent that “gets it”, make the smart platform and data buys, and articulate the most compelling visions for running the digital businesses of the future for their clients? Chef Grant takes reinvention so seriously, Alinea throws away perfectly good menus and starts from scratch on a regular basis. His colleague mentions on the show “We wanna lionize him [Grant] and romanticize him for creativity and innovation. But you can’t do it without being risky…What can you keep doing that’s new, that people will still like? And will you destroy yourself or destroy your reputation, or destroy the restaurant as a business…in the pursuit of doing something new?”
Three Michelin stars in, the idea of reinventing food and the restaurant experience is obviously still paying off for Chef Grant. Enterprise clients, by the same token, often state in our research that their hands are tied on innovation efforts with their service providers, for various reasons that have a lot to do with risk. Yet, we see a subsection of their peers succeed with collaborative engagements in place, creating joint innovation funds and chipping away at their legacy practices with their service partners. Innovation and risk – you cannot accept/expect one without planning for the other.
Reimagination is not a one man job: Alinea hit hard times despite all this success – in 2007 the press dubbed Grant as “the chef who couldn’t taste”, following his diagnosis of stage IV mouth cancer. A miraculous treatment saved his life, but took away his taste sensitivity. Grant powered through this phase with a renewed fervor to prove himself. He came up with even more provocative food concepts, and designed a system to communicate with his team on exactly how they were to be prepared (e.g. on a 1-5 acidic scale of pickles to bread…), and opened up other ideas for his sous chefs to experiment with more (how can we make food float?). This was revolutionary for an industry that thrives on “secret sauces” and ideas that chefs closely guard all the way to their graves.
We need to recognize that a lot of service providers today are in Chef Grant’s shoes – doing retail customer service without selling anything, running claims analytics without being insurance companies. This doesn’t exclude them from being innovative, or understanding the nuances of a particular industry. We need this caliber of human collaboration between buyers and service providers in the As-a-Service Economy, that can jointly contribute to executing on new ideas, without master-slave constraints.
These are seemingly broad, “soft” and intangible concepts, but they will dictate the level of success service providers have in either becoming OneOffice Enablers or being left perfecting their backoffice outsourcing recipes. As for Alinea, the team has just undergone renovations to rip apart and put back together their well-run restaurant. In the episode, Chef Grant even wonders why plate manufacturers get to decide the canvas on which food is presented. He asks rhetorically, “Can we eliminate what we’ve been doing for the last ten years…and start over? And, uh… the answer is, ‘Yeah.’” Sound familiar?
Fed up with 100-page profiles that focus on quantity as opposed to the key areas that really matter to you? If only there were something unbiased, concise, relevant and to-the-point to really help you navigate sourcing providers’ key offerings and capabilities?
One question that crops up time and again when we speak with outsourcing buyers is the need for easy-to-use resources to help identify and select the right provider for the right task. HfS is launching a new type of report that delivers buyers a view of an individual service provider’s capabilities across both horizontal and industry vertical offerings. This is in addition to our flagship Blueprint reports, which provide sourcing buyers with a view of the relative performance of providers in a particular offering space.
These Buyers Guides will be (as the name suggests) focused on the research needs of service buyers vetting potential IT/BPO service partners, providing in-depth, referenceable insight. The Guides will include service provider people, process and technology capabilities, key financials, client examples, excerpts from published HfS Blueprints, strengths, challenges, and analyst insight, as well as maturity modeling on the Eight Ideals of the As-a-Service Economy. All factors that influence buyers’ decision-making today and, more importantly, future-proof it for tomorrow.
So keep an eye out for the first of the Buyers Guides, starting with Genpact. It should appear on www.hfsresearch.com over the next two weeks. If you have any feedback or suggestions for Buyers Guides you’d like to see, please reach out – these are always welcome.
Long before it was turned into a Hollywood film, Douglas Adams’ The Hitchhiker’s Guide to the Galaxy was one of my favorite books. It reminds me of the unburdened days of my youth, when the book’s one-liners and quotes were secret code among my friends. Among them was “42” as the answer to the ultimate question of life, the universe, and everything, calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. To explain the meaning and the vision of Intelligent Automation, I wish I could throw a “42” at you.
Problem is, there are no simple answers.
To learn more about the complexity around the notion of Intelligent Automation, HfS has launched the inaugural Intelligent Automation Blueprint. Over the next several weeks I will share some of the learning from that project with you, starting with Capgemini today.
The HfS Intelligent Automation Blueprint assesses the delivery of comprehensive automation strategies
When HfS launched the Blueprint, there was broad encouragement and endorsement from the leading service providers, resulting in the project being oversubscribed. The stakeholders agreed that the main exam question should be how service providers orchestrate diverse sets of automation within the context of service delivery. How are they proactively transforming processes for clients? Thus, the emphasis is not on task automation or isolated point solutions, but on automation from a business function or process point of view. The work through the Blueprint process is a litmus test for the state of the industry.
The Intelligent Automation broader market is maturing
Capgemini is a compelling example of how the industry is maturing. So far, Capgemini had built out some strong RPA capabilities and was starting to expand its Intelligent Automation skills to application management around its Autonomics PaaS platform. Fast forward to July 2016, and Capgemini has just announced the Automation Drive suite of services, which aims to leverage those disparate automation skills as well as four CoEs across the traditional business units. As a result, the company is addressing the issues we raised earlier. The next logical step would probably be to organize those capabilities as one CoE at Group level. In practical terms, Capgemini has expanded its RPA methodology to the broader notion of Intelligent Automation. As in many discussions on the journey toward the As-a-Service Economy, the key was a change in mindset, as many delivery practitioners had to be introduced to the intricacies of Intelligent Automation. A further goal of the Automation Drive initiative is to progress to the next level of automating the automation, increasingly underpinned by a DevOps flavor. The ultimate vision is to evolve toward a Digital Delivery Center where the capabilities of the automation CoE are overlaid with governance and control at the business unit level.
The Bottom Line: We urgently need a debate on the transformation of knowledge work
Two other aspects caught our imagination in the discussions with Capgemini. First, the company has started to deploy IBM Watson to achieve better management of the resource bench, more accurate staffing, and anticipation of gaps, rotations as well as the optimization of the “fresher” intake.
Second, Capgemini has launched an Intelligent Automation Academy to train and upskill staff so that they can move into consultancy and advisory, product selection, and proof of concept development among other things. If successful, the Academy is likely to be extended to broader Analytics and Cognitive skills. These two initiatives are important because they are part of the transformation of knowledge work that has been all too often neglected.
Data scientists and cognitive skills don’t grow on trees. Therefore, formalizing the upskilling process is a very sensible idea. The proof will be in the pudding of successful transformational projects. The answer might not be “42” but a much more holistic approach to Intelligent Automation.
Ever wondered who the leading 50 BPO providers are across the globe, when we add up all relevant revenues? Well, you need look no further:
Source: HfS Research 2016, estimated from service provider financials. Revenues are fitted to the nearest calendar year. We attempt to make the BPO services numbers as close to HfS definitions as possible. The market primarily used for this list is the horizontal BPO processes of F&A, HR, Customer Care/CRM, and Procurement. Some industry-specific back-office processes are included, but we have excluded specialist categories, for example, banking securities.
We have segmented the providers into five broad categories: HRO specialists, Customer Care specialists, Multi-process BPO, Multi-process IT & BPO, and document management providers. The specialist areas – document management, customer care and HRO – should be fairly clear: the vast majority of the BPO services these companies provide fall into that category. The IT multi and BPO multi categories divide the companies that provide multiple types of BPO services into those with an IT heritage and those without. These categories are subjective; we based the splits partly on the type of services each company provides and partly on its background. For example, Accenture provides multiple types of BPO service and has a sizable IT services business, so we have described it as an IT multi.
HfS subscribers can download the full report, authored by Jamie Snowdon, Barbra McGann and Phil Fersht, by clicking here.
Paris-headquartered Workday specialist service provider everBe recently announced the opening of a new ‘Global Excellence Service Centre’ in Bordeaux, France. everBe selected Bordeaux from a list of 10 locations because, to quote the CEO, Jean Manaud:
“Finding a location where our staff could raise families and enjoy a quality life was a major criteria for us.”
I have read hundreds of press releases over the years of service providers opening delivery centers or Centers of Excellence around the world, to support local clients and/or be the hub for a particular solution capability. Typically, service providers highlight the ability to offer local support, with resources who understand local culture and language, and generally make local enterprises feel understood and loved. everBe will of course tick this off as well, but it recognizes that the most important element is to attract good people and retain them. everBe aims to support Workday Human Capital Management and Financial Management deployments and management services from this center, to which it wants to attract 30 consultants in the first round of hiring.
everBe has actually considered what people need to be happy in a job. It’s way beyond just having a stable job, the opportunity to advance skills, and getting a good salary. It’s also about actually liking where you live, and being happy there with your family.
Wow. The service provider pendulum is definitely continuing to swing towards focusing on its people.
I started playing Pokemon in college. More or less at the same time, I watched my first sci-fi movie, Minority Report, which blew my mind and started me imagining the role of many futuristic technologies, including augmented reality (AR). I could never have imagined that, 14 years later, the combination of the two (Pokemon and augmented reality), Pokemon Go, would become such a craze, adding $7 billion to a company’s valuation in just a couple of days. It also led me to think again about the use of AR in engineering services.
Augmented reality has a 360-degree relationship with engineering services.
On the one hand, AR augments existing product design, analysis, manufacturing, and services capabilities; on the other hand, augmented reality applications are themselves built using software product engineering.
Augmented reality enhances engineering services by adding visualization capabilities and takes the concept of the digital clone to the next level. The capability to visualize real objects can be a very powerful tool across the engineering services value chain of design, analysis, manufacturing, and services, and can reduce time and cost significantly while improving quality. Some of the AR use cases across the engineering services value chain are shown below.
Augmented reality is made possible by software product engineering services. ISVs rely on software product engineering support across design, architecture, development, testing, maintenance, integration, mobility, product management, and localization to support AR applications. Apart from engineering, AR has strong use cases in tourism, government, entertainment, healthcare, construction, military, logistics, retail, internet, etc., which ISVs will want to capitalize on.
AR also has strong use cases in engineering education. I remember how painful it was to hunt for lab and equipment manuals in the middle of experiments, and how difficult it was to visualize engineering objects from different angles in engineering drawing classes. Augmented reality can help future engineers in all these areas, and much more.
Okay Sherlock, what has Pokemon changed for augmented reality in engineering services? The two-word answer is “User Adoption.”
The augmented reality solutions mentioned above already exist, both as concepts in labs and as implementations in some advanced factories. Almost all CAD ISVs support AR applications. I have tested some of these solutions myself in the labs (the perks of being an engineering services analyst!), and the overall feeling was that they are still not intuitive or user friendly. The limited success of Google Glass and other similar gadgets has reinforced the belief that user adoption will take time. It is similar to the journey of mobile smartphones before the iPhone era. And then suddenly, the Steve Jobs iPhone moment changed everything and led to the birth of mobile-based unicorns such as Uber, WhatsApp, Instagram, etc.
This surge in user adoption of augmented reality is a good opportunity for all three stakeholders – enterprises, ISVs, and engineering service providers – to revisit their augmented reality strategies. Enterprises can have more confidence in user adoption among their employees and invest further in augmented reality technologies. ISVs can revisit their product roadmaps and prioritize augmented reality features and applications. Engineering service providers can take the lead and develop or enhance their augmented reality expertise, and should look at inorganic options too, with many potential AR startups such as nGRAIN emerging in the industry.
To conclude, this Pokemon phenomenon may be a fad, or maybe we are looking at the kind of Steve Jobs iPhone moment that changes user adoption forever. Whatever the case may be, we will keep track of AR developments in engineering services and include augmented reality as one of the core areas in our upcoming Industry 4.0 Blueprint.
HfS produces a roundup of the quarterly financial results (see our recent one for Q2), which means we have to wade through lots of financial data and listen to quarterly calls. We also read various bits of commentary around the performance. One thing that always surprises me is how much commentators make of one single set of results, or even one small part of those results. I ought to say that I look at the results as indicators for the market as a whole and for individual supplier performance, and I tend to focus on annual changes rather than quarterly changes. So I am always a little shocked when the bigger picture is lost. There is nothing worse than investment analysts and market pundits taking such a short-term, quarterly view of the world, so let’s take a step back and look at it long term.
When I looked at Infosys’ Q2 2016 results (I use calendar quarters, not fiscal – I am referring to fiscal Q1 2017), I thought they looked OK. Growth was down from the predicted level, but roughly on par with overall market performance – and certainly better in market terms than the last two years, with the exception of last quarter. Operating margin looked consistent with previous performance. While I understand there are other metrics to measure success, I think this was a pretty good showing; nothing concerned me.
Then I saw the commentary, which seemed to suggest the CEO was doing a bad job: lots of quite negative thoughts, disappointing results, etc., mostly concerning the forecast adjustment. I certainly get the value of accurate forecasting, and consistency in forecasting well is a sign of smart leadership. However, in a market going through the major upheavals currently seen in IT and BPO services, the ~10% year-on-year growth enjoyed by Infosys is at the top end of the scale in performance terms. To put this in context, Accenture’s last quarter was not as good, and neither was that of the stalwart TCS. We have yet to see the Q2 results of other contemporaries Cognizant and HCL, which may show an equivalent dip.
Because of the criticism leveled at Vishal Sikka, I looked at Infosys’ long-term performance and mapped it against each CEO’s tenure. Looking at the chart, the financial performance during his reign seems to be pretty good, with increasing revenue growth and stable operating margins. This quarter has shown a modest drop in growth of a couple of percentage points (from 13.3%, or 15% in constant currency, in Q1 to 10.9%, or 12.1% in constant currency, in Q2), but it just seems too early to call this a trend.
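For readers less familiar with why the reported and constant-currency growth figures differ, here is a minimal sketch, using entirely hypothetical revenue and exchange-rate numbers (not Infosys data): constant-currency growth restates the prior period at the current period’s exchange rates, isolating underlying business growth from currency swings.

```python
# Hypothetical revenue by billing currency, in millions of local currency.
prior = {"USD": 1000, "EUR": 400, "INR": 20000}
current = {"USD": 1080, "EUR": 440, "INR": 22500}

# Hypothetical exchange rates to USD in each period.
rates_prior = {"USD": 1.0, "EUR": 1.12, "INR": 0.0151}
rates_current = {"USD": 1.0, "EUR": 1.10, "INR": 0.0149}

def to_usd(revenues, rates):
    """Convert a currency-keyed revenue dict to total USD."""
    return sum(revenues[ccy] * rates[ccy] for ccy in revenues)

# Reported growth: each period valued at its own exchange rates.
reported_growth = to_usd(current, rates_current) / to_usd(prior, rates_prior) - 1

# Constant-currency growth: restate the prior period at current rates,
# so only volume/price changes remain, not currency movements.
cc_growth = to_usd(current, rates_current) / to_usd(prior, rates_current) - 1

print(f"Reported growth:          {reported_growth:.1%}")
print(f"Constant-currency growth: {cc_growth:.1%}")
```

With depreciating currencies, as in this made-up example, constant-currency growth comes out higher than reported growth – the same pattern as in Infosys’ 10.9% reported versus 12.1% constant-currency figures.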
The bottom line: you cannot judge a provider’s performance effectively with one set of results.
Moreover, without seeing some of the other results, it is difficult to gauge, as success can really only be measured in market terms and is relative. However worrying any dip in growth may be, we will not know if it is a problem until next quarter, after all the results are out.