
Fueling the rise of machine learning and deep learning is the availability of massive amounts of data, often referred to as big data. If you wanted to create an AI program to identify pictures of cats, you could access millions of cat images online. The same is true, if not more so, of other types of data. Various organizations have access to vast amounts of data, including charge card transactions, user behaviors on websites, data from online games, published medical studies, satellite images, online maps, census reports, voter records, economic data and machine-generated data (from machines equipped with sensors that report the status of their operation and any problems they detect). So what is the relationship between AI and big data?

This treasure trove of data has given machine learning a huge advantage over symbolic systems. Having a neural network chew on gigabytes of data and report on it is much easier and quicker than having an expert identify and input patterns and reasoning schemas to enable the computer to deliver accurate responses.

In some ways the evolution of machine learning is similar to how online search engines evolved. Early on, users would consult website directories such as Yahoo! to find what they were looking for — directories that were created and maintained by humans. Website owners would submit their sites to Yahoo! and suggest the categories in which to place them. Yahoo! personnel would then vet the sites and either add them to the directory or deny the request. The process was time-consuming and labor-intensive, but it worked well when the web had relatively few websites. When thousands of websites proliferated into millions and then crossed the one-billion threshold, the system broke down fairly quickly. Human beings simply couldn't work fast enough to keep the Yahoo! directories current.

In 2000, Yahoo! partnered with a smaller company called Google that had developed a search engine to locate and categorize web pages. Google's first search engine examined backlinks (pages that linked to a given page) to determine that page's relevance and authority and rank it accordingly in its search results. Since then, Google has developed additional algorithms to determine a page's rank (or relevance); for example, the more users who enter the same search phrase and click the same link, the higher the ranking that page receives. This approach is similar to the way neurons in an artificial neural network strengthen their connections.
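
To make the backlink idea concrete, here is a minimal sketch of a PageRank-style ranking over a tiny, invented link graph. It only illustrates the principle that pages gain authority from the pages linking to them; Google's production ranking system is far more elaborate and is not public.

```python
# Toy PageRank-style ranking: each page passes a share of its score to the
# pages it links to, so pages that accumulate more link weight rank higher.
# This is an illustration of the backlink principle, not Google's algorithm.

def rank_pages(links, iterations=20, damping=0.85):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks) if outlinks else 0.0
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph; pages that accumulate more link weight rank higher.
graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(sorted(rank_pages(graph).items(), key=lambda kv: -kv[1]))
```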

The fact that Google is one of the companies most enthusiastic about AI is no coincidence. Its entire business has been built on using machines to interpret massive amounts of data. Rosenblatt's perceptrons could look through only a handful of grainy images. Now we have processors that are at least a million times faster sorting through massive amounts of data to find the content that's most likely to be relevant to whatever a user searches for.
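
For a sense of how simple those early models were, here is a minimal sketch of a Rosenblatt-style perceptron trained on a toy problem. The data, learning rate, and number of epochs are made up for illustration.

```python
# Minimal Rosenblatt-style perceptron: weights are nudged whenever the
# prediction is wrong, gradually strengthening the useful connections.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with labels 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy data: learn a logical OR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
print(w, b)
```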

Deep learning architectures add even more power, enabling machines to identify patterns in data that just a few decades ago would have been nearly imperceptible. With more layers in the neural network, the network can perceive details that would go unnoticed by most humans. These deep artificial neural networks look at so much data and create so many new connections that it's not even clear how they discover the patterns they find.
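
The sketch below illustrates the "more layers" idea: each layer re-combines the outputs of the previous one, so deeper layers can respond to more abstract patterns. The layer sizes and random weights here are placeholders for illustration, not a trained network.

```python
# Sketch of a deep feed-forward pass: each layer re-combines the previous
# layer's outputs, so deeper layers can respond to more abstract patterns.
# Weights are random placeholders, not a trained cat detector.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [784, 256, 128, 10]   # e.g. a flattened image in, 10 scores out
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)             # hidden layers extract intermediate features
    return x @ weights[-1]          # final layer produces class scores

scores = forward(rng.normal(size=784))
print(scores.round(2))
```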

A deep learning neural network is like a black box swirling together computation and data to determine what it means to be a cat. No human knows how the network arrives at its decision. Is it the whiskers? Is it the ears? Or is it something about all cats that we humans are unable to see? In a sense, the deep learning network creates its own model for what it means to be a cat, a model that as of right now humans can only copy or read, but not understand or interpret.

In 2012, the Google Brain project did just that. Developers fed 10 million random images from YouTube videos into a network with over 1 billion connections running on 16,000 processor cores. They didn't label any of the data, so the network didn't know what it meant to be a cat, a human, or a car. Instead, the network just looked through the images and came up with its own clusters. It found that one cluster of very similar features showed up in many of the videos. To the network, that cluster looked like the image below.

A “cat” from “Building High-Level Features Using Large Scale Unsupervised Learning”

Now, as a human, you might recognize this as the face of a cat. To the neural network, it was just a very common something that appeared in many of the videos. In a sense, the network invented its own interpretation of a cat. A human could go through and tell the network that this is a cat, but that isn't necessary for the network to find cats in these videos. In fact, the network was able to identify a “cat” 74.8% of the time. In a nod to Alan Turing, the Cato Institute's Julian Sanchez called this the “Purring Test.”
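
The general idea of discovering clusters in unlabeled data can be shown with a much smaller toy example. The sketch below runs a simple k-means clustering over made-up points; the actual experiment used a billion-connection network over image data, not k-means over tiny vectors, but the principle of grouping similar items without labels is the same.

```python
# Toy illustration of grouping unlabeled data into clusters. The real system
# was a billion-connection network over images, not k-means on 2-D points.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(points, k=3, steps=10):
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(steps):
        # assign each point to its nearest center
        labels = np.argmin(
            ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1
        )
        # move each center to the mean of its assigned points
        for i in range(k):
            if np.any(labels == i):
                centers[i] = points[labels == i].mean(axis=0)
    return labels, centers

# Stand-in "image features": three blobs the algorithm has never seen labels for.
points = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in (0, 5, 10)])
labels, centers = kmeans(points)
print(centers.round(1))
```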

If you decide to start working with AI, accept the fact that your network might be sensing things that humans are unable to perceive. Artificial intelligence is not the same as human intelligence, and even though we may reach the same conclusions, we’re definitely not going through the same process.

Defining Intelligence

To understand artificial intelligence, we first need to define intelligence. According to the dictionary, intelligence is a "capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude for grasping truths, relationships, facts, meanings, etc." This definition is broad enough to cover both human and computer (artificial) intelligence. Both people and computers can learn, reason, understand relationships, distinguish facts from falsehoods and so forth.

However, some definitions of intelligence raise the bar to include consciousness or self-awareness, wisdom, emotion, sympathy, intuition and creativity. In some definitions, intelligence also involves spirituality — a connection to a greater force or being. These definitions separate natural, human intelligence from artificial intelligence, at least in the current, real world. In science fiction, in futuristic worlds, artificially intelligent computers and robots often make the leap to self-consciousness and self-determination, which leads to conflict with their human creators. In The Terminator, artificial intelligence leads to all-out war between humans and the intelligent machines they created.

Other Challenges in Defining Intelligence

A further challenge to our ability to define “intelligence” is the fact that human intelligence comes in many forms and often includes the element of creativity. While computers can be proficient at math, repetitive tasks, playing certain games (such as chess), and anything else a human being can program them to do (or to learn to do), people excel in a variety of fields, including math, science, art, music, politics, business, medicine, law, linguistics and so on.

Another challenge to defining intelligence is that we have no definitive standard for measuring it. We do have intelligence quotient (IQ) tests, but a typical IQ test evaluates only short-term memory, analytical thinking, mathematical ability and spatial recognition. In high school, we take ACTs and SATs to gauge our mastery of what we should have learned in school, but the results from those tests don't always reflect a person's true intelligence. In addition, while some people excel in academics, others are skilled in trades or have a higher level of emotional competence or spirituality. There are also people who fail in school but still manage to excel in business, politics, or their chosen careers.

Without a reliable standard for measuring human intelligence, it's very difficult to point to a computer and say that it's behaving intelligently. Computers are certainly very good at performing certain tasks and may do so much better and faster than humans, but does that make them intelligent? For example, computers have been able to beat humans at chess for decades. IBM's Watson beat some of the best champions on the game show Jeopardy. Google's DeepMind has beaten the best players at Go, a 2,500-year-old Chinese game so complex that there are thought to be more possible configurations of the board than there are atoms in the universe. Yet none of these computers understands the purpose of a game or has a desire to play.
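
That scale claim about Go is easy to sanity-check with back-of-the-envelope arithmetic: each of the 361 points on a Go board can be empty, black, or white, giving 3^361 (roughly 10^172) raw board configurations, compared with a common estimate of about 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check: upper bound on Go board configurations versus a
# common estimate of atoms in the observable universe (~10^80).
from math import log10

board_configs = 3 ** 361                    # each of 361 points: empty, black, or white
print("board configurations ~ 10^%d" % int(log10(board_configs)))   # roughly 10^172
print("atoms in universe    ~ 10^80")
```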

Expertise in Pattern-Matching

As impressive as these accomplishments are, they are still just a product of a computer’s special talent for pattern-matching. Pattern-matching is what happens when a computer extracts information from its database and uses that information to answer a question or perform a task. This seems to be intelligent behavior only because a computer is excellent at that particular task. However, excellence at performing a specific task is not necessarily a reflection of intelligence in a human sense. Just because a computer can beat a chess master does not mean that the computer is more intelligent. We generally don't measure a machine's capability in human terms—for example, we don't describe a boat as swimming faster than a human or a hydraulic jack as being stronger than a weightlifter—so it makes little sense to describe a computer as being smarter or more intelligent just because it is better at performing a specific task.

A computer's proficiency at pattern-matching can make it appear to be intelligent in a human sense. For example, computers often beat humans at games traditionally associated with intelligence. But games are the perfect environments for computers to mimic human intelligence through pattern-matching. Every game has specific rules with a certain number of possibilities that can be stored in a database. When IBM's Watson played Jeopardy, all it needed to do was use natural language processing (NLP) to understand the question, buzz in faster than the other contestants, and apply pattern-matching to find the correct answer in its database.
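
Here is a toy sketch of pattern-matching as lookup: normalize the input and match it against stored entries. The knowledge base and matching rule are invented for illustration, and Watson's actual pipeline was vastly more sophisticated, but the underlying move is still matching input against stored data.

```python
# Toy sketch of pattern-matching: match the words of a question against a
# tiny, invented "database" of keyword patterns and stored answers.
import re

knowledge_base = {
    frozenset({"largest", "planet", "solar", "system"}): "Jupiter",
    frozenset({"author", "pride", "prejudice"}): "Jane Austen",
}

STOPWORDS = {"what", "who", "is", "the", "of", "a", "an", "in"}

def answer(question):
    words = set(re.findall(r"[a-z]+", question.lower())) - STOPWORDS
    # pick the stored pattern with the largest overlap with the question
    best_pattern = max(knowledge_base, key=lambda p: len(p & words))
    return knowledge_base[best_pattern] if best_pattern & words else "no match"

print(answer("What is the largest planet in the solar system?"))  # Jupiter
print(answer("Who is the author of Pride and Prejudice?"))        # Jane Austen
```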

Good Old-Fashioned Artificial Intelligence (GOFAI)

Early AI developers knew that computers had the potential to excel in a world of fixed rules and possibilities. Only a few years after the first AI conference, developers had their first version of a chess program. The program could match an opponent's move against thousands of possible countermoves and play out thousands of games to gauge the consequences before deciding which piece to move and where to move it, all in a matter of seconds.
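
That "play out the consequences before moving" idea is essentially game-tree search. Below is a minimal minimax sketch over a trivially small, made-up game rather than chess; a real chess engine applies the same principle with board representations, evaluation heuristics, and pruning.

```python
# Minimax sketch: play out possible continuations and pick the move whose
# worst-case outcome is best. The toy game below is invented for illustration;
# a chess engine applies the same search to board positions.

def minimax(state, depth, maximizing, moves, evaluate):
    """moves(state) -> list of successor states; evaluate(state) -> score."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    values = (minimax(s, depth - 1, not maximizing, moves, evaluate) for s in options)
    return max(values) if maximizing else min(values)

# Hypothetical toy game: the state is a running total, each move adds 1, 2, or 3,
# the maximizer wants the final total high and the minimizer wants it low.
def moves(total):
    return [total + step for step in (1, 2, 3)]

def evaluate(total):
    return total

# The maximizer weighs each first move against the minimizer's best replies.
best_first = max(moves(0), key=lambda s: minimax(s, 3, False, moves, evaluate))
print("best first move:", best_first)
```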

Artificial intelligence is always more impressive when computers are on their home turf — when the rules are clear and the possibilities limited. Organizations that benefit most from AI are those that work within a well-defined space with set rules, so it’s no surprise that organizations like Google fully embrace AI. Google’s entire business involves pattern-matching — matching users’ questions with a massive database of answers. AI experts often refer to this as good old-fashioned artificial intelligence (GOFAI).

If you're thinking about incorporating AI in your business, consider what computers are really good at: pattern-matching. Does your organization do a lot of pattern-matching? Does much of your work follow set rules and a limited range of possibilities? That work will be the first to benefit from AI.
