You saw in my last post that John Searle devised the Chinese room argument to argue that pattern matching shouldn’t be mistaken for intelligence. He also distinguished two ways of thinking about AI, which he called strong and weak artificial intelligence.
Most AI experts believe that we’re just starting down the path of weak AI — using AI to answer factual questions, provide directions, manage our schedules, make recommendations based on our past choices and reactions, help us do our taxes, prevent online fraud and so on. Many organizations already use weak AI to help with narrow tasks such as these. Strong AI is still relegated to the world of science fiction.
You can witness weak AI at work in the latest generation of personal assistants, including Apple’s Siri and Microsoft’s Cortana. You can talk to them and even ask them questions. They convert your spoken words into text the machine can process and use pattern matching to answer your questions and respond to your requests. That’s not much different from traditional interactions with search engines such as Google and Bing. The difference is that Siri and Cortana behave more like human beings; they can talk. They can even book a reservation at your favorite restaurant and place calls for you.
These personal assistants don’t have general artificial intelligence (if they did, they’d certainly get sick of listening to your daily requests). Instead, they focus on the narrow task of listening to your input and matching it to their database.
John Searle was quick to point out that any symbolic AI should be considered weak AI. However, in the 1970s and 80s, symbolic systems were used to create artificial intelligence software that could make expert decisions. These were commonly called expert systems.
In an expert system, people who specialize in a given field input the patterns that the computer can match to arrive at a given conclusion. For example, in medicine, a doctor may input groupings of symptoms that match up with various diagnoses. A nurse inputs the patient’s symptoms into the computer. The computer can then search its database for a matching diagnosis and present the most likely diagnosis to the patient. For example, if a patient has a cough, shortness of breath and a slight fever, the computer may conclude that the patient probably has bronchitis. To the patient, the computer may seem to be as intelligent as a doctor, but in reality all the computer is doing is matching symptoms to possible diagnoses.
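To make this concrete, here is a minimal sketch of that kind of rule matching in Python. The symptom lists and diagnoses are illustrative placeholders, not real medical rules, and a real expert system would use a far richer rule engine:

```python
# A toy rule base: each rule pairs a set of symptoms with a diagnosis.
# These groupings are made up for illustration only.
RULES = [
    ({"cough", "shortness of breath", "slight fever"}, "bronchitis"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"high fever", "body aches", "cough"}, "flu"),
]

def diagnose(symptoms):
    """Return the diagnosis whose rule best matches the input symptoms."""
    best_match, best_score = None, 0
    for rule_symptoms, diagnosis in RULES:
        # Require every symptom in the rule to be present in the input.
        if rule_symptoms <= symptoms and len(rule_symptoms) > best_score:
            best_match, best_score = diagnosis, len(rule_symptoms)
    return best_match  # None when no rule matches

print(diagnose({"cough", "shortness of breath", "slight fever"}))  # bronchitis
print(diagnose({"headache"}))  # None
```

Notice that the system isn’t reasoning about medicine at all; it is only comparing sets of symbols, which is exactly Searle’s point.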
Expert systems run into the same problem as other symbolic systems; they ultimately suffer a combinatorial explosion. There are simply too many symptoms, diagnoses and variables to consider when trying to diagnose an illness. Just think about all the steps a doctor must take to arrive at an accurate diagnosis — conducting a physical exam, interviewing the patient, ordering lab tests and sometimes ruling out a long list of other illnesses that have similar symptoms. Imagine all the possible ways a patient could answer each question the doctor asks and all the various combinations of lab results.
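Some rough arithmetic shows how quickly this gets out of hand. The question counts below are assumptions chosen just for illustration:

```python
# If an interview has 30 yes/no questions, each question doubles the
# number of distinct patient presentations the rule base must cover.
yes_no_questions = 30
presentations = 2 ** yes_no_questions
print(presentations)  # 1073741824 -- over a billion combinations

# Add just 10 lab tests that each report low/normal/high, and the
# space multiplies by 3 for every test.
lab_tests = 10
presentations_with_labs = presentations * 3 ** lab_tests
print(presentations_with_labs)  # 63403380965376
```

No team of experts could hand-write rules to cover a space that large, which is why purely symbolic expert systems hit a wall.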
These early expert systems also had a serious limitation — the real possibility that given certain input, the system would be unable to find a match. You have probably experienced this on various websites; you input your search phrase, and the site informs you that it found no match.
Even with these drawbacks, the symbolic approach was a key starting point for artificial intelligence and is still in use today, typically with some modifications (as you’ll see in the next post).