
Fueling the rise of machine learning and deep learning is the availability of massive amounts of data, often referred to as big data. If you wanted to create an AI program to identify pictures of cats, you could access millions of cat images online. The same is true, if not more so, of other types of data. Various organizations have access to vast amounts of data, including charge card transactions, user behaviors on websites, data from online games, published medical studies, satellite images, online maps, census reports, voter records, economic data, and machine-generated data (from machines equipped with sensors that report the status of their operation and any problems they detect). So what is the relationship between AI and big data?

This treasure trove of data has given machine learning a huge advantage over symbolic systems. Having a neural network chew through gigabytes of data and extract patterns from it is much easier and quicker than having an expert identify and hand-code the patterns and reasoning schemas a symbolic system needs to deliver accurate responses.

In some ways the evolution of machine learning is similar to how online search engines evolved. Early on, users would consult website directories such as Yahoo! to find what they were looking for; these directories were created and maintained by humans. Website owners would submit their sites to Yahoo! and suggest the categories in which to place them. Yahoo! personnel would then vet the sites and either add them to the directory or deny the request. The process was time-consuming and labor-intensive, but it worked well when the web had relatively few websites. When thousands of websites proliferated into millions and then crossed the one-billion threshold, the system broke down fairly quickly. Human beings couldn't work fast enough to keep the Yahoo! directories current.

In 2000 Yahoo! partnered with a smaller company called Google that had developed a search engine to locate and categorize web pages. Google's first search engine examined backlinks (pages that linked to a given page) to determine the relevance and authority of that page and rank it accordingly in its search results. Since then, Google has developed additional signals to determine a page's rank (or relevance); for example, the more users who enter the same search phrase and click the same link, the higher the ranking that page receives. This approach is similar to the way neurons in an artificial neural network strengthen their connections.
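As a rough illustration of the backlink idea (this is a generic PageRank-style sketch, not Google's actual algorithm), a page's score can be computed by repeatedly letting each page pass a share of its own score to the pages it links to:

```python
# Minimal sketch of link-based ranking (not Google's actual algorithm).
# Pages that receive links from many well-linked pages score higher.

def rank_pages(links, iterations=20, damping=0.85):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# Toy web: page "a" is linked to by both "b" and "c", so it ranks highest.
toy_web = {"a": ["b"], "b": ["a"], "c": ["a"]}
print(sorted(rank_pages(toy_web).items(), key=lambda kv: -kv[1]))
```

The key point is that no human labels the pages; the structure of the links themselves determines the ranking.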

The fact that Google is one of the companies most enthusiastic about AI is no coincidence. Its entire business has been built on using machines to interpret massive amounts of data. Rosenblatt's perceptrons could look through only a handful of grainy images. Now we have processors that are at least a million times faster sorting through massive amounts of data to find the content that's most likely to be relevant to whatever a user searches for.

Deep learning architectures add even more power, enabling machines to identify patterns in data that just a few decades ago would have been nearly imperceptible. With more layers in the neural network, it can pick up on details that would go unnoticed by most humans. These deep artificial neural networks look at so much data and create so many new connections that it's not even clear how the programs discover the patterns they do.
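To make "more layers" concrete, here is a minimal sketch of a layered network's forward pass in NumPy. The layer sizes and the ReLU activation are illustrative assumptions, not a description of any particular production system:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Each layer transforms the previous layer's output, so deeper stacks can
# build progressively more abstract features (edges, then textures, then shapes).
layer_sizes = [784, 256, 128, 64, 10]   # illustrative sizes for a small image input
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)      # hidden layers act as nonlinear feature detectors
    return x @ weights[-1]   # final layer produces a score per class

scores = forward(rng.normal(size=784))
print(scores.shape)  # (10,)
```

Training would adjust the weights from data; the sketch only shows how each added layer re-describes the output of the one before it.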

A deep learning neural network is like a black box swirling together computation and data to determine what it means to be a cat. No human knows how the network arrives at its decision. Is it the whiskers? Is it the ears? Or is it something about all cats that we humans are unable to see? In a sense, the deep learning network creates its own model for what it means to be a cat, a model that as of right now humans can only copy or read, but not understand or interpret.

In 2012, the Google Brain project did just that. Developers fed 10 million random images taken from YouTube videos into a network that had over 1 billion connections running on 16,000 processor cores. They didn't label any of the data, so the network didn't know what it meant to be a cat, a human, or a car. Instead, the network just looked through the images and came up with its own clusters. It found that many of the videos contained a very similar pattern, and to the network that pattern looked like this.

A “cat” from “Building high-level features using large scale unsupervised learning”

Now, as a human you might recognize this as the face of a cat. To the neural network it was just a very common pattern that it saw in many of the videos. In a sense, the network invented its own interpretation of a cat. A human could go through and tell the network that this is a cat, but that isn't necessary for the network to find cats in these videos. In fact, the network was able to identify a “cat” 74.8% of the time. In a nod to Alan Turing, the Cato Institute’s Julian Sanchez called this the “Purring Test.”
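The Google Brain experiment trained a very large unsupervised network; as a much smaller analogy of learning without labels, the sketch below clusters unlabeled vectors with k-means (assuming scikit-learn is available). The algorithm forms its own groups, with no one telling it what any group means:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled "images" as flat pixel vectors. Nobody tells the algorithm what
# any cluster means; it only groups vectors that look alike.
rng = np.random.default_rng(42)
cat_like = rng.normal(loc=0.8, scale=0.1, size=(500, 64))   # one recurring pattern
other    = rng.normal(loc=0.2, scale=0.1, size=(500, 64))   # everything else
images = np.vstack([cat_like, other])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(images)

# The centroid of the dominant cluster is the algorithm's own "idea" of the
# recurring pattern, loosely analogous to the network's cat-face feature.
print(kmeans.cluster_centers_.shape)   # (2, 64)
print(np.bincount(kmeans.labels_))     # roughly 500 images per cluster
```

A human can later look at a cluster and name it "cat," but the grouping itself happens without that label.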

If you decide to start working with AI, accept the fact that your network might be sensing things that humans are unable to perceive. Artificial intelligence is not the same as human intelligence, and even though we may reach the same conclusions, we’re definitely not going through the same process.

The Imitation Game is a 2014 movie based on the biography of Alan Turing, a Cambridge and Princeton graduate. In 1939, Turing joined the British Government Code and Cypher School at Bletchley Park to decipher Nazi codes, including Enigma. The Polish had broken the Enigma code before the war, but the Nazis increased the complexity of their Enigma machines, leaving roughly 10^114 possible configurations. At its height, the British code-breaking operation involved some 12,000 people working three shifts around the clock. Turing and his team built an electromechanical machine called the Bombe that searched through the possible Enigma settings to find the one used for each message. Using the Bombe, the British were able to read a great deal of the German military's encrypted traffic, including naval messages.
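The real Bombe electromechanically tested rotor settings against a suspected fragment of plaintext (a "crib"). As a loose, toy analogy only, the sketch below brute-forces the key of a simple shift cipher using the same crib idea; it is not Enigma, just an illustration of searching a keyspace:

```python
import string

ALPHABET = string.ascii_uppercase

def shift(text, key):
    """Caesar shift: positive key encrypts, negative key decrypts."""
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] if c in ALPHABET else c
                   for c in text)

def brute_force(ciphertext, crib):
    """Try every possible key; keep the one whose output contains the expected crib."""
    for key in range(26):
        candidate = shift(ciphertext, -key)
        if crib in candidate:
            return key, candidate
    raise ValueError("no key produced the crib")

ciphertext = shift("WEATHER REPORT FOLLOWS", 7)
print(brute_force(ciphertext, "WEATHER"))   # -> (7, 'WEATHER REPORT FOLLOWS')
```

A shift cipher has only 26 keys; the point of the Bombe was to mechanize this kind of search over an astronomically larger space, fast enough to matter each day.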

While working toward his Ph.D. at Princeton, Turing published a paper entitled "On Computable Numbers, with an Application to the Entscheidungsproblem," in which he envisioned a single, universal machine that could solve any computable problem by following instructions encoded on a paper tape. For example, given one set of instructions, the machine might calculate square roots; given another set, it could solve crossword puzzles. Although others have been credited with inventing the first computer, Turing's ideas gave birth to the field of computer science, and specifically to computer programming.
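The idea of a machine that reads symbols from a tape and follows a table of instructions can be sketched in a few lines of code. The tiny program below, which appends one mark to a unary number, is purely illustrative:

```python
# Minimal Turing-machine sketch: a state table drives reads and writes on a tape.

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head]
        write, move, state = program[(state, symbol)]   # look up the instruction
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head == len(tape):
            tape.append(blank)
    raise RuntimeError("machine did not halt")

# Program: scan right past the 1s, write one more 1, then halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(increment, "111_"))  # -> 1111
```

Swap in a different instruction table and the same machinery computes something else entirely, which is the sense in which the machine is "universal."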

Testing a Machine to Determine Whether It Is Intelligent

In his 1950 paper "Computing Machinery and Intelligence," Turing proposed a test for intelligence called the "imitation game," based on a Victorian parlor game. The game involves three players: Player A is a man, Player B is a woman, and Player C is a man or woman who acts as the interrogator. Player C cannot see Players A or B and can communicate with them only through written messages. Player C writes down questions that are passed to Player A or B and receives written answers back from them. Based on the answers, Player C must determine which player (A or B) is the man and which is the woman. Player A's job is to trick Player C into making the wrong choice, while Player B attempts to assist Player C in making the right choice.

Turing imagined an updated version of the imitation game in which Player A is replaced by a machine. If the machine were just as effective as a human player in fooling Player C, Turing deemed this proof of (artificial) intelligence. The imitation game later came to be referred to as the "Turing test."
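As a rough sketch of the test's structure only (the "players" here are trivial placeholder functions, not real chatbots or people), the flow of the machine version looks something like this:

```python
import random

# Placeholder respondents. In a real test these would be a chatbot and a
# person exchanging written messages with the interrogator.
def machine_player(question):
    return "I enjoy a quiet evening with a good book."

def human_player(question):
    return "Honestly, it depends on my mood that day."

def imitation_game(interrogator, questions):
    """The interrogator sees only transcripts and must guess which player is the machine."""
    players = {"A": machine_player, "B": human_player}
    transcripts = {name: [(q, player(q)) for q in questions]
                   for name, player in players.items()}
    guess = interrogator(transcripts)   # interrogator returns "A" or "B"
    return guess == "A"                 # True means the machine was identified

# A naive interrogator that guesses at random; the machine "passes" about half the time.
naive = lambda transcripts: random.choice(["A", "B"])
print(imitation_game(naive, ["What do you do on weekends?"]))
```

The interesting part of the real test is, of course, everything hidden inside the machine player.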

This test sparked a lot of curiosity in the possibility of an "intelligent machine"—one that could accomplish a specific task in the presence of uncertainty and variations in its environment. For a machine to be considered intelligent, it must be able to monitor its environment and make adjustments based on its observations. In the case of the Turing test, the machine would need to be able to "understand" its role in the game (to fool Player C) and its gender (male) and be able to choose responses to unanticipated questions in a way that would confuse Player C.

Even after nearly 70 years, this test is still intriguing and a considerable challenge for computer developers. You can witness a version of this today by interacting with smartphones and artificially intelligent virtual assistants, like Siri and Alexa, whose answers to questions and responses to directives are often comical at best.

Turing Test Limitations

Most experts agree that the Turing test is not necessarily the best way to gauge intelligence. For one thing, it depends a great deal on the interrogator; some people are easily fooled. It also assumes that artificial intelligence is like human intelligence and that computers have mastered verbal communication, which is far from the truth; computers often misinterpret words and phrases. If a computer cannot carry on an intelligent conversation, how can we expect it to perform higher-level tasks that require accurately interpreting verbal and non-verbal communication, such as diagnosing an illness?

A Test That Continues to Drive Innovation

The Turing test still inspires a lot of innovation. Companies continue to try to create intelligent chatbots, for example, and there are still natural language processing (NLP) competitions built around passing the test. Indeed, it seems as though modern machines are only a few years away from passing it. Many modern NLP applications can accurately interpret a wide range of requests; now they just have to improve their ability to respond.

Yet even if a machine is able to pass the Turing test, it still seems unlikely that that same machine would qualify as intelligent. Even if your smartphone is able to trick you into thinking you’re talking to a human, that doesn’t mean that it will offer meaningful conversation or care about what you think or feel.
