Cognitive Computing Demystified: The What, Why, and How

“Cognitive Computing comes from a mashup of cognitive science — the study of the human brain and how it functions — and computer science, and the results will have far-reaching impacts on our private lives, healthcare, business, and more,” says Bernard Marr, in What Everyone Should Know About Cognitive Computing. “Some people say that Cognitive Computing represents the third era of computing: we went from computers that could tabulate sums (1900s) to programmable systems (1950s), and now to cognitive systems.”

IBM Research sees the opportunity for “widespread improvements in quality of life” when Cognitive Computing is used to improve or enhance understanding, productivity and efficiency, and people are given “appropriate control over, or feedback to the system.”

Terminology: Artificial Intelligence and Cognitive Computing

Although the two terms are widely used as synonyms, Hadley Reynolds of the Cognitive Computing Consortium, speaking at the DATAVERSITY® Smart Data Online 2016 Conference, drew a distinction between them. With Artificial Intelligence, Reynolds said, the machine is an autonomous actor, similar to the AI operating system in the movie “Her”: its computing is designed to work like a human brain, able to create and act on its own. With Cognitive Computing, the machine serves as an informational tool, dependent on a human to act or inform.

Although he sees these differences in theory, in practice, he said, the terms Artificial Intelligence (or “Augmented” Intelligence, according to IBM) and Cognitive Computing are “used interchangeably in any context where the computer serves in an advisory role.”

What is Cognitive Computing?

Bernard Marr defines Cognitive Computing as the simulation of human thought processes in a computerized model: “using self-learning algorithms that use data mining, pattern recognition, and natural language processing, the computer can mimic the way the human brain works.”

He goes on to say:

“These cognitive systems rely on Deep Learning algorithms and neural networks to process information by comparing it to a teaching set of data. The more data the system is exposed to, the more it learns, and the more accurate it becomes over time, and the neural network is a complex ‘tree’ of decisions the computer can make to arrive at an answer.”
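In concrete terms, the “teaching set” Marr describes is labeled training data, and “becoming more accurate over time” is the familiar learning curve. The Python sketch below illustrates that idea with scikit-learn’s bundled digits dataset and a small neural network; it is a generic illustration under those assumptions, not the internals of any particular cognitive platform.

```python
# A minimal learning-by-example sketch (not any vendor's system):
# a small neural network's accuracy rises as it sees more of its
# "teaching set", echoing Marr's point about data volume.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # handwritten-digit images, labeled
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on progressively larger slices of the training data.
for n in (100, 400, len(X_train)):
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=0)
    model.fit(X_train[:n], y_train[:n])  # pattern recognition by example
    print(f"{n:4d} examples -> test accuracy {model.score(X_test, y_test):.2f}")
```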

How is Cognitive Computing Being Used?

Sue Feldman of the Cognitive Computing Consortium, also speaking at the DATAVERSITY® Smart Data Online 2016 Conference, said that Cognitive Computing is being used to assist with health care, banking, and land lease management, and that there are now companies making assistive devices for people with disabilities. The IBM Research site reports uses in social services, education, transportation, public safety, the environment, and infrastructure as well. Feldman added:

“These cognitive applications are designed to create a human-computer partnership and to engage in a dialog, as both of you progress along this journey toward understanding and decision-making.”


Cognitive Computing Examples

Designed for non-technical users, a “Cognitive Oncology Assistant” can sit in the room with a doctor and patient, engage in the conversation, suggest tests and potential treatments, and alter those suggestions based on patient preferences uncovered during the visit, Feldman said. The Assistant can provide underlying evidence for its treatment recommendations, with answers and questions drawn from vast stores of Big Data in the field of medical research.

“I’m really looking forward to this kind of assisted health care,” says Feldman. “No doctor can possibly sort through all this information and find patterns and relationships, and in fact that’s what we see as a real benefit to both Big Data and to Cognitive Computing.”

In banking, Feldman said, senior investment strategists are using a “cognitive banking investment advisory service” that gathers relevant news, understands the economy, and knows the companies and marketplace for investments. The service can help a strategist leverage their personal network to best advantage. It can also send an alert when market factors change, based on the goals of the company, using internal and external knowledge.

A service that Feldman called “mind blowing” is a cognitive assistant for blind or visually impaired people.

By combining predictive models of how to navigate a physical space, models of objects – such as chairs, windows, and doors – and their attributes, voice recognition, machine vision, and language understanding, the assistant helps blind or visually impaired people navigate the physical world more independently, even in places they have never visited.

Using a smart phone, Feldman said, a person could leave home, get to a nearby store, make her way to the cookie aisle, have the ingredients read to her, get a reminder of her diet goals, buy her snack, and be guided safely back home. If she meets a friend on the way, the assistant could tell her who the friend is and what mood he is in, using her contacts list along with facial and emotion recognition.
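Architecturally, such an assistant is an orchestration layer over several models. The sketch below is a hypothetical illustration of that pipeline: the Detection type and the detect_objects() and speak() helpers are invented stand-ins for real machine-vision and text-to-speech services, not any vendor’s actual API.

```python
# Hypothetical orchestration sketch for an assistive navigation aid.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "door", "chair", "cookie shelf"
    bearing: str      # rough direction relative to the user
    distance_m: float

def detect_objects(camera_frame) -> list[Detection]:
    # Invented stand-in for a trained object-detection model;
    # returns canned results so the sketch runs end to end.
    return [Detection("chair", "to your left", 1.5),
            Detection("door", "straight ahead", 4.0)]

def speak(text: str) -> None:
    # Invented stand-in for a text-to-speech service.
    print(f"[assistant] {text}")

def narrate_surroundings(camera_frame) -> None:
    # Turn raw detections into spoken guidance, nearest objects first.
    for d in sorted(detect_objects(camera_frame), key=lambda d: d.distance_m):
        speak(f"{d.label}, {d.distance_m:.0f} meters {d.bearing}")

narrate_surroundings(camera_frame=None)  # no real camera in this sketch
```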

In a post on the IBM Research blog entitled IBM Research Takes Watson to Hollywood with the First ‘Cognitive Movie Trailer,’ John R. Smith, IBM Fellow and Manager of Multimedia and Vision, said that researchers used IBM’s Watson to identify significant emotional scenes in the movie “Morgan,” for the purpose of creating a movie trailer in record time:

“Traditionally, creating a movie trailer is a labor-intensive, completely manual process. Teams have to sort through hours of footage and manually select each and every potential candidate moment. This process is expensive and time consuming – taking anywhere between 10 and 30 days to complete. From the moment our system watched Morgan for the first time, to the moment our filmmaker finished the final editing, the entire process took about 24 hours.”

Smith noted that it was a true collaboration: “Our system could select the moments, but it’s not an editor.” By pairing human and cognitive machine, IBM was able to shorten a process that takes weeks without cognitive help to a single day.
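Smith’s description suggests a simple selection pattern: score candidate scenes for emotional intensity, rank them, and hand the strongest to a human editor. The Python toy below illustrates only that ranking step; the scene boundaries and scores are invented, and this is not IBM’s actual pipeline.

```python
# Toy illustration of emotional-moment selection (not IBM's method).
scenes = [
    # (start_sec, end_sec, emotion_score); in a real system the scores
    # would come from visual/audio emotion models. These are invented.
    (120, 128, 0.91),
    (310, 319, 0.44),
    (455, 461, 0.87),
    (602, 610, 0.73),
]

def candidate_moments(scenes, top_k=3):
    """Rank scenes by emotion score; a human editor cuts the trailer."""
    return sorted(scenes, key=lambda s: s[2], reverse=True)[:top_k]

for start, end, score in candidate_moments(scenes):
    print(f"{start}-{end}s (score {score:.2f})")
```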

IBM Research and Watson also collaborated with the Marchesa designers to create a “Cognitive Dress.” An article on the IBM blog entitled Weaving Cognitive into Couture: Watson and Marchesa Collaborate for the Met Gala outlines the process used:

“Rooted in the belief that color and images can indicate moods and send messages, Marchesa first selected five key human emotions – joy, passion, excitement, encouragement and curiosity – that they wanted the dress to convey.”

After seeing hundreds of images associated with Marchesa dresses, Watson was able to suggest color palettes that were in line with the identified emotions and Marchesa’s brand. Designers blended their own expertise with Watson’s suggestions and IBM’s cognitive technologies to create an interactive, wearable gown.

The dress was programmed to change color in response to the emotional tone of comments from viewers on social media the night of the gala.
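Conceptually, the dress’s behavior is a mapping from comment text to one of the five chosen emotions, and from emotion to color. The sketch below is a hedged illustration of that idea, not IBM’s implementation: the keyword lookup in classify_emotion() stands in for a real tone-analysis model, and the emotion-to-color pairings are invented.

```python
# Invented emotion-to-color table; only the five Marchesa emotions are real.
EMOTION_COLORS = {
    "joy": "#FFD700",
    "passion": "#C41E3A",
    "excitement": "#FF6F00",
    "encouragement": "#2E8B57",
    "curiosity": "#4169E1",
}

def classify_emotion(comment: str) -> str:
    # Stand-in for a real tone-analysis model: a crude keyword lookup.
    keywords = {"love": "passion", "wow": "excitement", "happy": "joy"}
    for word, emotion in keywords.items():
        if word in comment.lower():
            return emotion
    return "curiosity"  # default when no keyword matches

def dress_color(comment: str) -> str:
    return EMOTION_COLORS[classify_emotion(comment)]

print(dress_color("Wow, that gown is stunning!"))  # -> #FF6F00
```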

When to Use Cognitive Computing (and when not to)

Cognitive Computing extends computing to a new set of complex, ambiguous, human problems, but it is not applicable in every context, said Feldman. The value received must justify the cost in money and productivity, or the technology should provide a competitive edge; it shouldn’t necessarily replace systems already in use.

When to Use:

  • When problems are complex
    • Information and situations are shifting and the outcome depends on context
  • When there are diverse, changing data sources
    • Using structured data with unstructured data, like text or images
  • When there is no clear right answer
    • Evidence is complex, conflicting or ambiguous
  • When multiple ranked, confidence-scored options are needed (see the sketch after this list)
  • When unpredictability makes processing intensive and difficult to automate
  • When context-dependent information is desired, based on time, user, location, or point in task
  • When exploration or work across silos is a priority
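To make the ranked, confidence-scored options item concrete, here is a minimal sketch with invented scores: rather than one answer, a cognitive application returns several weighted hypotheses and leaves the final decision to a human.

```python
# Minimal "ranked, confidence-scored options" pattern (scores invented).
from typing import NamedTuple

class Hypothesis(NamedTuple):
    answer: str
    confidence: float  # 0.0-1.0, as produced by some underlying model

def rank(hypotheses: list[Hypothesis], threshold: float = 0.2):
    # Drop low-confidence options; present the rest strongest-first.
    return sorted((h for h in hypotheses if h.confidence >= threshold),
                  key=lambda h: h.confidence, reverse=True)

for h in rank([Hypothesis("Treatment A", 0.72),
               Hypothesis("Treatment B", 0.55),
               Hypothesis("Treatment C", 0.08)]):
    print(f"{h.answer}: {h.confidence:.0%}")
```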

When Not to Use:

  • When predictable, repeatable results are required
    • Sales reports, inventory tracking
  • When all data is structured, numeric and predictable
  • When human-machine natural language interaction is not necessary
  • When a probabilistic approach is not desirable
  • When shifting views and answers are not appropriate or are indefensible due to industry regulations
  • When existing transactional systems are adequate

A white paper by Future Research LLC, entitled The Seven Core Technologies Driving Digital Transformation, says the key is to find balance:

“The dilemma businesses will have to face in the coming five to ten years will be in striking the right balance between automating tasks that no longer require human intervention or management, and enhancing human-driven roles with AI solutions. Equipping decision-makers, project managers, attorneys, retail clerks and analysts with AI capabilities may ultimately be a more efficient model than replacing them with machines altogether.”

Challenges to Adoption

Reynolds said that according to a recent IDC survey, market awareness among companies is low, despite consumer demand for more cognitive features. Although there is potential for more widespread use, most of the companies currently making strides in Cognitive Computing are large firms like Apple, Tesla, IBM, and Google, and health care, finance, sales, and marketing are the industries with the earliest applications on the market. Multiple vendor interpretations, a lack of trusted sources and credible guidelines, skill sets in short supply, and uncertainty about ROI for a highly customized, expensive product all indicate that Cognitive Computing is in an “Early Chaotic Era,” he said, and that it needs time to mature before more companies can make a smooth transition to the technology.

Moving to new technology can also have social and legal fallout, and cognitive solutions can go awry, according to TechRepublic’s article Top 10 AI Failures of 2016. Biases or omissions in the information fed into cognitive systems can negatively affect outcomes, and “freakish accidents,” while rare, do occur:

  • When The First International Beauty Contest Judged by Artificial Intelligence used a data set that lacked diversity, all the winners it picked had light skin.
  • Microsoft’s Tay, an online chatbot modeled on a teenage girl, quickly devolved into a Hitler-loving, feminist-bashing troll, reflecting and magnifying the negative tweets of the internet trolls interacting with it.
  • A “crime-fighting robot” in Silicon Valley injured a young boy in what the company called a “freakish accident,” and a fatal accident occurred with a Tesla running on autopilot. According to TechRepublic, “There have been other fatalities linked to autopilot, including one in China, although none can be directly tied to a failure of the AI system.”

Hadley Reynolds remarked that ethical questions have yet to be answered concerning responsibility for computer errors that harm people, data ownership and control, and job disruption, and that no body of legal precedent has yet formed around these issues. IBM Research also sees the need for answers, and says that by developing best practices it hopes to achieve a level of trust. From IBM’s Safety and Control Issues for AI:

“To reap the societal benefits of Artificial Intelligence, we will first need to trust it. That trust will be earned through experience, of course, in the same way we learn to trust that an ATM will register a deposit, or that an automobile will stop when the brake is applied. Put simply, we trust things that behave as we expect them to. But trust will also require a system of best practices that can guide the safe and ethical management of AI; a system that includes alignment with social norms and values; algorithmic accountability; compliance with existing legislation and policy; and protection of privacy and personal information.”

Cognitive Computing is designed to work alongside human beings to address complex human problems, and as IBM Research’s Smith says, “The combination of machine intelligence and human expertise is a powerful one.” Reynolds said that in ten years we won’t be talking about Cognitive Computing, because its elements will be embedded in most new software applications, becoming ubiquitous. He said that research is spotty and best practices have not yet emerged, but that with a central, credible resource and a community of experts, “we can expect to be surprised by new ways of interacting with machines we’ve never imagined.”
