IBM’s Watson is coming of age


IBM’s Watson has come a long way since winning Jeopardy! in 2011. While popular games remain a benchmark for advances in Artificial Intelligence, as seen in Google DeepMind’s victory at Go, Watson’s capabilities have evolved considerably. So much so that IBM is betting much of its fundamental transformation on its deep investments in Watson, which has become a key strategic pillar for the company alongside cloud and Bluemix. Having had the opportunity to attend World of Watson in Las Vegas, one couldn’t help but notice the scale of the evolving ecosystem, with more than 17,000 people attending the gathering. Suffice it to say, that scale also hints at the ecosystem’s complexity.

Charting the complexity of the evolving Watson ecosystem

What struck us most in Las Vegas was the comprehensiveness (put positively) or complexity (put slightly more negatively) of the various Watson offerings. The core Watson Cognitive Platform is composed of four components: cloud, content, compute, and conversation. From a service delivery perspective, the two key components are conversation and content. Within the conversational services, Watson Conversation enables developers to create dynamic interactions and custom applications using the full spectrum of Watson services.
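
To make the conversational piece concrete, below is a minimal sketch of what a developer-side call to the Watson Conversation service looked like, using Python's requests library against the service's REST interface. The workspace ID and credentials are placeholders, and the endpoint URL and version date are our assumptions drawn from the public Bluemix documentation of the time, not details confirmed at the event.

    import requests

    # Minimal sketch of a Watson Conversation "message" call. Credentials and
    # workspace ID are placeholders; the endpoint URL and version date are
    # assumptions based on the public Bluemix documentation of the time.
    CONVERSATION_URL = ("https://gateway.watsonplatform.net/conversation/api"
                        "/v1/workspaces/{workspace_id}/message")

    def send_message(text, workspace_id, username, password, context=None):
        """Send one user utterance to a Conversation workspace and return
        Watson's reply, carrying the dialog context between turns."""
        response = requests.post(
            CONVERSATION_URL.format(workspace_id=workspace_id),
            params={"version": "2016-09-20"},  # assumed API version date
            auth=(username, password),         # service credentials from Bluemix
            json={"input": {"text": text}, "context": context or {}},
        )
        response.raise_for_status()
        return response.json()

    # Usage: the response carries detected intents, entities, and the dialog's
    # output text; pass the returned context back in to keep conversation state.
    # reply = send_message("I want to reorder toner", WORKSPACE_ID, USER, PASS)
    # print(reply["output"]["text"], reply["intents"])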

In a nutshell, the two big themes in the client examples we saw at the event are:

  • IBM enabling the creation of increasingly ubiquitous chatbots in front-office-focused activities to create “customer delight.” Pearson and Staples shared their use of Watson to drive new ways for customers to engage with their brands, with Pearson’s ‘interactive textbook/learning’ concept and Staples’ ‘That was easy’ button powered by a provisioning assistant. General Motors is putting Watson to work in its OnStar vehicle navigation system, which it is turning into a mobile-commerce platform in partnership with IBM.
  • Watson Virtual Agent enabling business users to quickly configure and deploy a virtual agent without needing specific technical skills. The key value proposition focuses on providing these agents with pre-trained knowledge, with the crucial differentiation of being able to take action. IBM is also rolling out new categories for Watson, focused on the notion of conversational applications – the next generation of systems able to interact naturally with business users to make operational decisions – in the areas of marketing, commerce, supply chain management, work, education, talent, and financial services, along with the existing Watson Health group.

Increasingly, the context and content for those agents are highly vertical-specific, underpinned by the Watson Knowledge Studio. Critically, this Studio can teach Watson the nuances of natural language in the cloud without a single line of code being written. The broader foundational services of Watson include a plethora of capabilities, including visual recognition, emotion analysis, and Personality Insights with consumption-preference features.

At the same time, IBM is accelerating the differentiation both from lower-level chatbots and from other virtual agents by integrating industrial-scale analytics capabilities through the Watson Data Platform. By announcing Watson Machine Learning, available through APIs, IBM is further enhancing the ability to create vertically relevant insights. With the company’s quest to “own all the data”, including the acquisition of The Weather Company and its Twitter partnership, IBM is very much highlighting Watson’s ability to draw insights from datasets that its competitors don’t have access to. This makes data and analytics services perhaps the most crucial differentiation of the Watson portfolio against other AI competitors. Amid all this innovation, in our view the most important aspect of the Watson portfolio is the portability of data, which mitigates concerns about vendor lock-in. If needed, data can be anonymized; however, customers that go down that route lose the upside of continuous learning.
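
As a sketch of what “machine learning available through APIs” means in practice: a trained model is deployed, and applications then post rows of data to a scoring endpoint for predictions. The token route, URLs, and payload field names below are our assumptions from the public documentation, not details confirmed at the event.

    import requests

    # Hypothetical sketch of online scoring against Watson Machine Learning.
    # TOKEN_URL and the scoring URL are assumptions; in practice the scoring
    # URL is returned by the service when a trained model is deployed.
    TOKEN_URL = "https://ibm-watson-ml.mybluemix.net/v3/identity/token"

    def score(scoring_url, username, password, fields, rows):
        """Request predictions for a batch of rows from a deployed model."""
        # Exchange the Bluemix service credentials for a bearer token.
        token = requests.get(TOKEN_URL, auth=(username, password)).json()["token"]
        response = requests.post(
            scoring_url,
            headers={"Authorization": "Bearer " + token},
            json={"fields": fields, "values": rows},  # assumed payload shape
        )
        response.raise_for_status()
        return response.json()

    # Usage (all identifiers hypothetical):
    # result = score(SCORING_URL, USER, PASS,
    #                ["temperature", "store_id"], [[21.5, "NY-042"]])
    # print(result["values"])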

Overall, the examples are starting to trickle in, and experiments are underway both for client use cases and for embedding Watson capabilities into the broader IBM organization (e.g., the GBS group, Cognos). Where we still see market confusion/hype is the pace of change, especially at the forefront, with the timeframe for deploying Watson in a client environment. A financial services client we spoke to, who has been on this journey with IBM Watson for a couple of years now, offered an insightful comment: “Watson used to be a product; it is now a brand.”

IBM must focus not just on painting the art of the possible with Watson, but also on presenting the market with realistic expectations: the level of customization, tuning, and data curation needed to start driving value from Watson, the progress on integration into other groups, and IBM’s overall positioning of Watson as a big, scalable business today. Marketing strategies aside, we believe there is no such thing as a turnkey solution in the Intelligent Automation market; it is all about transformation, and data curation should be the centerpiece. As the player with the broadest set of capabilities in the cognitive market, IBM has climbed the steepest learning curve, and it can use that to its advantage in guiding enterprise clients on the journey to more Intelligent Operations.

The rise of Virtual Agents

The strong expansion of Watson’s capabilities is aligned with the findings of HfS’ inaugural Intelligent Automation Blueprint. Despite the market’s obsession with RPA, HfS is seeing broad AI capabilities coming to the fore. In particular, we are seeing the emergence of the notion of Virtual Agents underpinned by broad process and automation capabilities. These agents range from heavyweights such as Watson and Amelia to open-source avatars. In this context, however, Watson stands out, as IBM is turning it into an ecosystem play where partners can provide services and capabilities on top of its building blocks. At the same time, we are seeing traction for cognitive engines such as Celaton, for integrating unstructured data, or RAVN, as an example of vertically focused machine learning and enterprise search. Stay tuned for our upcoming research on the changing notion of service agents through cognitive computing.

The holistic notion of IA, as well as the disruption stemming from it, is best illustrated by the case study of KPMG. The company is planning for and investing in the disruption of its core business, i.e., tax, accountancy, and advisory, which can hardly be described as being high on the technology-affinity list. If we see the rise of robo-accountants, one gets a feeling for what the disruption around activities such as compliance, reconciliation, or data entry will look like in a few years’ time. In KPMG’s view, however, this rise is a double-edged sword, as it is also expected to create more work for its partners through much more thorough and efficient discovery processes.

Adjusting the ethics of AI

Listening to the many use cases in the medical sector, notably around oncology research, is nothing short of humbling. Yet the strong acceleration of AI also raises many ethical questions. Fundamentally, as an industry we have to help clients with the transformation of knowledge work, but we also need a much more honest and transparent debate on ethics. A first important step in this direction is the formation of what is awkwardly called the “Partnership on Artificial Intelligence to Benefit People and Society”, supported by Google, Amazon, Facebook, Microsoft, and IBM. More commonly referred to as the AI Alliance, the new body is tasked to “conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology”. While it is easy to dismiss these ambitions as aspirational, more openness and clarity are critical to advancing service delivery in a responsible manner. If we are really moving toward a virtual workforce, a blend of humans and algorithms, then broad consensus on ethics is mandatory.



1 comment


  1. Cognitive automation seems promising, but the lack of real-world examples for mundane/transactional tasks in corporate offices makes it difficult to embrace; convincing top management without a proven track record is the issue.
