
Artificial Intelligence: It’s More Than Just Robotics

28 Nov 2017 by Lana Gates

Artificial Intelligence (AI) is cropping up in more and more articles and news stories these days. In August, Microsoft replaced mobility with AI as one of its top priorities. This came on the heels of the company’s acquisitions of AI startups Maluuba and SwiftKey and the establishment of a formal AI and research group. These moves are an effort to position the company to be a leader in this up-and-coming field.

But as early as April 2016, Google predicted a shift from a mobile-first to an AI-first world. Yet, there’s still a lot of speculation about exactly what AI is.

Many people think it focuses on robotics and their threat to take over the workforce. The term conjures up images like those portrayed in the 2001 movie “A.I. Artificial Intelligence,” in which a robotic boy develops human feelings, including jealousy of his human “brother.”

Developments such as Facebook’s AI program that created its own language make that idea seem less far-fetched.

In reality, however, AI is nothing to be afraid of. And it doesn’t strictly involve robotics. It’s actually much more than that. Let’s dispel the myths and define exactly what artificial intelligence is.

What is artificial intelligence?

AI is not new. Computer science professor John McCarthy coined the term in 1955, in a proposal for a Dartmouth College conference held the following summer. As computer technology evolved, so did AI research and its related field of robotics.

In 1966, Edinburgh, Scotland, hosted the first machine intelligence workshop. That was followed by the first International Joint Conference on Artificial Intelligence in Washington, D.C., in 1969.

Natural language processing emerged in the early 1970s, when computers first demonstrated the ability to understand English sentences. Shortly thereafter, the Assembly Robotics group at Edinburgh University developed a robot that could use vision to locate and assemble models.

Fast-forward to 1997, when autonomous robotics played a major role in NASA’s successful landing on Mars. In the same decade, AI-based programs led to the expansion of the World Wide Web.

Twenty years later, the AI market is primed for growth. In fact, a report by McKinsey Global Institute forecasts the market for AI technology to experience “strong growth over the next three years.” Market intelligence firm Tractica concurs, predicting annual worldwide AI revenue to balloon from $1.38 billion in 2016 to $59.75 billion in 2025, as shown in Figure 1. That’s a jump of $58.37 billion in just nine years, or a compound annual growth rate of 52%.
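The arithmetic behind that forecast is easy to verify; here is a quick sketch using the article’s figures:

```python
# CAGR check for Tractica's AI revenue forecast (figures from the article)
start, end, years = 1.38, 59.75, 9  # billions USD, 2016 -> 2025

growth = end - start                     # absolute increase
cagr = (end / start) ** (1 / years) - 1  # compound annual growth rate

print(f"Increase: ${growth:.2f}B")  # Increase: $58.37B
print(f"CAGR: {cagr:.0%}")          # CAGR: 52%
```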

Figure 1: Worldwide artificial intelligence market revenue, 2016-2025 (in millions of U.S. dollars)

But what is AI? According to the McKinsey report, artificial intelligence encompasses six major areas: natural language, autonomous vehicles, smart robotics, virtual agents, computer vision and machine learning. Accenture categorizes the six areas a little differently. While agreeing with computer vision, machine learning and natural language processing, it classifies the others as audio processing, knowledge representation and expert systems.

What is machine learning?

Of the six areas, machine learning receives the most attention today. In fact, the term is often used interchangeably with AI, and this category received the largest share of investment in 2016: nearly 60% of all investment from outside the AI industry, according to McKinsey.

The report defines machine learning as AI that “enables machines to exhibit human-like cognition.” PwC expands on that, describing AI as “a key enabler, from on-demand translation services to weather forecasting, to guessing what users want based on where they are and what they’ve been doing.”

Examples of AI

IBM’s Watson is a good example of machine learning. The computer gained attention in 2011 when it won the game show “Jeopardy!” against two high-ranking champions.

But keep in mind machines like these can’t learn on their own. They’re programmed by humans. “It’s basic computing,” explains Mike Guggemos, chief information officer at Insight. “Machines aren’t thinking. People write scripts that tell chatbots what to do.”
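Guggemos’ point can be illustrated with a toy scripted bot. The rules below are hypothetical, purely for illustration: the bot never thinks, it only matches keywords a human wrote into a script.

```python
# A toy scripted "chatbot": it only follows rules a human wrote.
# (Hypothetical rules, for illustration only.)
RULES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "returns": "You can return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def reply(message: str) -> str:
    # Return the scripted answer for the first matching keyword.
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I don't understand. Let me connect you to a person."

print(reply("What are your hours?"))
print(reply("Tell me a joke"))
```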

IBM explains that “today’s machine learning uses analytic models and algorithms that learn from data, finding hidden insights without being explicitly programmed where to look.”
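As a minimal sketch of what “learning from data” means, consider fitting a line to observed points by least squares rather than hard-coding the relationship. This is a toy example, not IBM’s actual method:

```python
# Learn y = a*x + b from example data instead of programming the rule.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # noisy observations of roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept, derived from the data alone
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"learned model: y = {a:.2f}x + {b:.2f}")  # y = 1.96x + 0.14
print(f"prediction for x=6: {a*6 + b:.1f}")      # 11.9
```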

Examples of this can be found in our everyday lives. Did you engage in a chat session on a website? That was likely driven by AI that followed an algorithm to locate the information you requested. When you ask Siri or Cortana questions or give commands to Alexa or Google Home, you’re also accessing analytics and algorithms. Similarly, AI is at the heart of some security surveillance, big data analytics and accurate predictive technology.

Digging deeper: Deep learning

Some people argue that these examples of AI aren’t true representations of the technology. They believe authentic AI involves machines learning on their own. And that’s the part that raises fears in many individuals.

This form of AI is referred to as deep learning, a subset of machine learning. It involves teaching computers to learn by example. This is the technique behind self-driving cars. For deep learning to be effective, two key ingredients are required: large amounts of labeled data and significant computing power.
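A heavily simplified sketch of learning by example, assuming a single artificial neuron trained on labeled data with the classic perceptron rule (real deep learning stacks many layers of such units, which is why it needs so much data and compute):

```python
import random

# Train one artificial neuron on labeled examples of logical AND.
random.seed(0)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # labeled examples
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random starting weights
bias = 0.0
lr = 0.1  # learning rate

for _ in range(50):  # training epochs
    for (x1, x2), label in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = label - out          # compare prediction with the label
        w[0] += lr * err * x1      # nudge weights to reduce the error
        w[1] += lr * err * x2
        bias += lr * err

for (x1, x2), label in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", pred)
```

After training, the neuron reproduces AND on all four examples; it was never told the rule, only shown labeled data.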

Fed by big data, deep learning can be used, for example, to analyze billions of transactions and predict what customers want to buy. It can also help retailers pinpoint product deliveries with drones. Automating hearing and speech translation is yet another example.

Elon Musk, the chief executive officer of Tesla, hasn’t been quiet about his fears of this type of AI. He’s called it the “biggest risk we face as a civilization.” With scenes from movies such as “I, Robot” in our minds, his fears don’t seem unfounded. But in the near term, AI offers more benefits than it does harm.

Benefits of AI

The idea behind AI — specifically machine learning — is that it can be used for tedious, repetitive tasks, freeing people to focus on more thought-intensive projects. This can be seen in automated prescription reordering over the phone, which frees pharmacists to consult with patients and fill prescriptions. Another example is automated data analytics, which frees workers to concentrate on innovation.

Artificial intelligence offers numerous business advantages, across myriad industries. Some of the most notable benefits of AI are:

  • Efficiency — particularly in production and maintenance
  • More informed decision-making
  • Personalized experiences
  • Reduced costs
  • Targeted sales and marketing

Already, AI is being adopted to optimize sales and marketing efforts, fraud detection, credit risk scoring and customer service, as shown in Figure 2, just to give a few examples. But its potential is widespread.

Figure 2: Adoption of specific AI use cases in 2017, by category

Looking ahead

The truth is AI will eliminate some jobs that can be automated, particularly factory and manufacturing roles. Despite that, it will be a catalyst for new job creation, just as the Industrial Revolution was in the late 1700s and early 1800s.

AI promises to improve traffic flow and significantly reduce the number of car accidents each year. It holds the potential to shorten the time it takes to get medical test results, decrease the number of misdiagnoses and even track health in real time. It will save us time by surfacing the products we want without making us search for them. And that’s only the beginning.

Guggemos predicts AI will be the new cloud. “AI is not a transformative technology just because it can do the things that humans currently do,” he says. “It’s a transformative technology because it empowers us to do more.”
