
How Computers Work, Part I
August 2001 • Vol. 5 Issue 3
Page(s) 174-179 in print issue

Artificial Intelligence & Expert Systems
AI Is Not Yet Programmed To Love
Most people immediately think about interactive robots and computers with the ability to act almost human when they hear the term “artificial intelligence.” If you’ve ever watched a science fiction movie, you’ve probably developed an idea about how artificial intelligence, or AI, is supposed to look, act, and feel. From the HAL 9000 interactive computer in “2001: A Space Odyssey” to C-3PO in “Star Wars” and the characters in Steven Spielberg’s summer 2001 movie “A.I.,” interactive computers and robots are a staple of science fiction.

If such films are your only knowledge of AI, you might consider current AI discoveries a colossal failure. Sure, there was the IBM computer that beat a human world chess champion a few years ago, but you’re still waiting for the Jetsons-like Rosey the robot maid in your home. If so, you’re going to be sorely disappointed for several more years. Just because we don’t yet have talking, cooking, and cleaning robots doesn’t mean the concept of AI is a failure, though.

In fact, discoveries made by AI scientists in the past are probably a part of your everyday life, and you might not even realize it. AI continues to evolve and expand, tackling problems in a variety of areas. Today’s AI work may not lead to an interactive robot appearing in your kitchen tomorrow, but it is working in that direction . . . along with plenty of other directions.



  AI Defined. If you’re only interested in the short definition for AI, here it is: It’s the process of studying how humans reason, perceive, act, and think and then transferring those capabilities to a computer in an effort to simulate human actions. Webster’s defines AI as “the capability of a machine to imitate intelligent human behavior.” If you prefer the long version for the definition of AI, sit back and relax.

When we asked scientists working in the field of AI to provide us with a definition, many said such a request wasn’t possible because AI is constantly evolving. For example, much of what scientists defined as AI in the 1970s no longer qualifies as AI today. In 20 more years, the definition of AI almost certainly will be different from what it is today.

“One of the problems with AI is that it always focuses on the edge of what can be done with computer science and computers at the moment,” Dr. Joe Bigus, senior technical staff member at IBM’s T.J. Watson Research Center, told Smart Computing in a recent interview. “It changes because computers are so much faster and the problems that can be solved change. Once the problems are tractable, once people have algorithms and the computing resources to solve the problems, then it’s no longer AI.”

In his book, “Artificial Intelligence,” scientist Patrick Henry Winston says AI isn’t the same as psychology because of the emphasis on computation, nor is it the same as computer science because of the emphasis on reasoning. Winston says engineers and scientists usually view AI in different terms: An engineer is trying to solve real-world problems using AI, while scientists are trying to use AI to explain intelligence and represent knowledge.

AI seeks to mimic human intelligence through a variety of means. AI technological advances have led to developments in computer voice recognition, image recognition, environment simulation, and document translation, among other areas.



  Expert Systems. One major branch of AI is expert systems, which combine a database of knowledge with an inference engine, usually a series of if-then or other rule-based statements, that a computer uses to attempt to solve a problem. These databases typically are extremely large, attempting to cover every potential situation a user could encounter.

“What we’re trying to do is capture someone’s expertise in a computer program,” says Dr. Ian Morrison, vice president of Acquired Intelligence, a creator of expert systems. “Expertise is something that would be a skill or knowledge . . . that you don’t have direct access to.”

When a company is creating an expert system, it usually enlists the help of one or more of the top experts in the field. The expert provides his or her knowledge for the computer database, and the company applies a series of if-then statements or rules to that knowledge, giving the expert system the appearance of reasoning. Finally, when the expert system is in use, it asks the user a series of questions, attempting to find patterns it can apply to its database of knowledge and to narrow the potential answers until it can make an educated guess at a final answer. The expert system attempts to follow the same procedures and to ask the same questions the actual human expert would follow to develop an answer to a query.

For example, if you created an expert system to predict weather, the expert system would attempt to follow the same reasoning pattern a human expert would follow. The expert system might ask the user for a variety of weather information, such as location and movement of low-pressure systems or barometer readings. The expert system then would apply that information to its hierarchy of rules to narrow the possible weather predictions down to the best possible answer. For instance, if the user entered data indicating a warm air mass and a cold front were going to collide, the expert system then would deduce a frontal thunderstorm might occur. Just like a human expert, the expert system sometimes will be unable to develop a conclusion, possibly because the database doesn’t have the required knowledge or because the user isn’t providing enough details in the patterns of data he’s feeding the expert system.
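The if-then reasoning described above can be sketched in a few lines of code. The rules and weather facts below are invented for illustration; a real expert system would encode far more knowledge, and its rules would come from a human forecaster rather than a programmer's guesses.

```python
# A minimal sketch of a rule-based expert system for weather prediction.
# The rules and fact names here are hypothetical illustrations, not a
# real forecasting model.

RULES = [
    # (conditions that must all be true, conclusion to add)
    ({"warm_air_mass", "cold_front_approaching"}, "frontal_thunderstorm_likely"),
    ({"falling_barometer", "high_humidity"}, "rain_likely"),
    ({"rising_barometer"}, "fair_weather_likely"),
]

def infer(facts):
    """Forward-chain: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The user's answers to the system's questions become the initial facts.
answers = {"warm_air_mass", "cold_front_approaching"}
print(infer(answers))
```

Just as the article notes, if the user's answers don't match any rule's conditions, the system simply returns no new conclusions, mirroring a human expert who lacks enough information to answer.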

Morrison says many companies employ Acquired Intelligence to help them tap into the knowledge of their own experts. Those companies may want to guard against valuable employees leaving the company and taking unique knowledge with them. Alternatively, companies may wish to prepare for the retirement of an especially valuable employee. The companies sometimes simply want to give the expert a chance to deal with research rather than requests from fellow employees.

“With only one person who has the expertise, it also can be problematic for many people to try and access him,” Morrison says. “[The expert system] frees up the expert from answering the same simple questions over and over.”

Obviously, the thought processes one human expert uses to reach a conclusion may not be the same as another expert’s in the same field, and each expert system reflects the individual traits of the specialist who helped create it. Therefore, two expert systems within the same industry might not reach the same conclusion when presented with the exact same set of data. Some AI scientists no longer consider expert systems a form of cutting-edge AI because they are commonplace in so many different industries; this belief goes back to the thought that the definition of AI is an ever-changing one.



  Some History.


IBM's Deep Blue computer scored an unexpected upset of world chess champion Garry Kasparov in 1997 and showed the potential power of AI technology.
Most scientists credit Alan Turing with creating the idea of AI. Turing, an English mathematician, introduced the idea of the Turing Test in a 1950 paper. The Turing Test is an attempt to determine whether a computer could simulate a human in typed, text-based conversation to the point where the recipient of the messages couldn’t tell if a computer or a human had generated the responses. Turing designed the test as a method of determining whether a computer was intelligent. Other AI scientists expanded the idea of the Turing Test to yield the branch of science labeled AI. Whereas one of the original aims of AI scientists was to create artificial learning and to give computers the ability to reason on their own, today’s reality shows that such capabilities are still decades away, if they’re possible at all.

“The purpose of AI is to enhance the usability of computers,” Bigus told Smart Computing. “It’s a tool for people. . . . People are still superior, mainly because we’re so adaptable and flexible.”

Algorithms are the basis for many forms of AI. (An algorithm is a step-by-step mathematical procedure used to solve a problem.) In an AI process called machine learning, an application adjusts its algorithms based on new information it acquires.
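As a toy illustration of that idea (the specifics here are invented for this sketch, not drawn from any system in the article), consider a one-parameter model that repeatedly adjusts itself as it processes examples:

```python
# A toy illustration of machine learning: an algorithm that adjusts
# itself. A one-weight model learns a conversion factor from examples
# by making repeated small corrections.

def train(examples, passes=200, rate=0.01):
    """Fit y = w * x by nudging w toward each observed example."""
    w = 0.0
    for _ in range(passes):
        for x, y in examples:
            error = y - w * x
            w += rate * error * x   # adjust the model using new information
    return w

w = train([(1, 2.0), (2, 4.0), (3, 6.0)])  # data follows y = 2x
print(round(w, 2))
```

After enough passes, the weight settles near 2, the factor hidden in the data; the "learning" is nothing more mysterious than repeated error correction.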



  Uses For AI. Scientists and engineers are considering hundreds, if not thousands, of uses for AI now and in the future. Here are a few ideas currently in development or in use. (You might be surprised at how often AI-developed technologies affect you today.)

Document translation. Many companies offer customer service options through their Web sites, giving customers a chance to type a message in a Web form. Companies hope by using Web forms they can cut down on the amount of time customer service members must spend dealing with phone requests. However, typewritten requests can begin to occupy a large amount of customer service time, too.

IBM’s Marshall Schor, who is a senior technical staff member and senior manager for knowledge systems in the Mathematical Sciences Dept. of the T. J. Watson Research Division, says IBM is using AI technology to develop software that can translate a written document properly. This seems like a complex task at first glance, made even more difficult by the different phrases and sentences people might create to convey a similar message. Such language abnormalities require an enormous database of language usage possibilities to give the computer the necessary information to make an accurate translation of the document.

“Machine learning is a bunch of algorithms, actually highly mathematical approaches,” Schor says. “We’ve been applying it to text and trying to learn what a document is about.”

The document translation software would need to decipher the meaning within a message, thereby allowing it to route the message to the correct department in the company for the proper customer service response. For some simple requests, the software would need to decipher the message and then generate the appropriate response itself, leaving humans completely out of the process.

To perform this task, the software would need to compare each incoming message to a preset list of potential message categories, and then try to determine one or a few of those categories under which the message could fit. Not only does the software need to find the correct category, but it also might need to route a message to more than one department. For instance, a bank customer might ask for information on a car loan and on a savings account in the same message.

The software applies powerful algorithms to the phrases and sentences, helping it make accurate predictions about the message’s intent and route it to the correct department. As the software routes messages, it might adjust its algorithms to incorporate new occurrences, applying what it discovered earlier to new messages.
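A keyword-scoring sketch suggests how such routing might work at its simplest. The departments and keyword lists below are hypothetical; a production system like the one Schor describes would use statistical classifiers trained on real message data rather than hand-picked word lists.

```python
# A hedged sketch of message routing by keyword overlap. Departments
# and keywords are invented for illustration.

CATEGORIES = {
    "loans":   {"loan", "car", "mortgage", "borrow", "rate"},
    "savings": {"savings", "account", "deposit", "interest"},
    "support": {"password", "login", "error", "website"},
}

def route(message, threshold=1):
    """Return every department whose keywords match the message."""
    words = set(message.lower().split())
    scores = {dept: len(words & kw) for dept, kw in CATEGORIES.items()}
    return [dept for dept, score in scores.items() if score >= threshold]

# As in the bank example above, one message can legitimately route
# to more than one department.
msg = "I would like information on a car loan and a savings account"
print(sorted(route(msg)))
```

Note that the function returns a list, not a single category, which handles the car-loan-plus-savings-account case directly.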

Cycorp is building a knowledge base of everyday facts that it hopes to use in the building of AI applications. Using this knowledge base, a computer might be able to read a document and provide an answer to a question or problem, based on the facts it already knows. In some instances, a computer could read a document and then determine some new facts from it that the computer can automatically add to the knowledge base.



At the Web site for Acquired Intelligence (check it out at http://www.aiinc.ca/demos), users can try demonstration versions of simple expert systems, one of which can help you identify a type of whale.
Gaming. One of the areas where AI made some of its biggest headlines was in the area of gaming after an IBM computer called Deep Blue defeated reigning world chess champion Garry Kasparov in May 1997. Many of today’s most sophisticated computer games, such as The Sims (http://thesims.ea.com) and Black & White (http://www.bwgame.com), use AI to help them adapt to the human player’s level of expertise or to change the game’s behavior patterns during play. (If only Pac-Man had been able to use AI.) These adaptations challenge the human player and make the game more enjoyable for a longer period.

Language translation. Web designers write Web sites in multiple languages in order to appeal to the global reach of the World Wide Web and the Internet. If you find a cool Web site written in German, though, it’s not going to help you much if you only can read English. Language-translation software, with a background in AI technology, can translate a foreign language Web site into a language you can understand.

Mimicking emotion. AI scientists are working on affective computing, which would help computers simulate human emotions. A computer using such technology might sense the emotions of the user and react to them itself, or it might deliver news or messages with the appropriate emotion in place. Obviously, humans often make choices and react to situations based on their current emotional state. AI computers trying to simulate human traits would need to be able to react to such human emotional changes to seem realistic.

Personal assistant. Sprint’s ATL (Advanced Technology Lab) is developing an interactive personal assistant, called a virtual agent, to help users manage many aspects of their personal lives (see accompanying infographic). The virtual agent, which Sprint could employ in the next three to five years, would take advantage of higher bandwidth capabilities for communications networks to manage telephone calls, e-mail messages, and dozens of other forms of communication.

“As you get more and more capacity and higher bandwidth connections, what do you have?” says Mike O’Brien, manager of the service architecture group at the ATL. “If you just get faster Web pages, you’re going to start getting diminishing returns. . . . If you [use the bandwidth to] actually generate a system that’s context aware, you can start interfacing with people. When I walk into the room, it has face recognition. It says, ‘Hi, Mike.’ It knows to interact with me differently than with someone else.”

The virtual agent will use AI technology to adjust to the changing needs of the user, too. As you make changes in your life, the virtual agent will change your personal profile automatically.

Product personalization. Have you ever wondered how a Web site seems to know your personal purchasing tendencies each time you visit, targeting its advertisements or site links to you? AI technology might be behind this type of specialized, personalized data. The software uses information in your personal database stored at the Web site to target you with product teaser links and ads. As you make purchases or click on certain ads, the software shows its AI tendencies by learning the new information and changing your personal profile automatically.



Through the Cyc project (http://www.cyc.com), Cycorp is attempting to build an enormous database of common-sense facts that scientists can use in AI applications.
Scheduling. AI technology has become an important cog in software used to create complex schedules, Schor says. In one example, a home repair and appliance installation service organization uses a large database containing a wide variety of information to best schedule technicians. Once a customer calls the service center and logs a service or installation request, the scheduling software searches through several databases to find the best technician for the job, depending on the technician’s location, availability, inventory of parts, and expertise.
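A simplified sketch shows the shape of that search. The technician records and the scoring weights below are hypothetical; a real dispatcher would query live databases for location, availability, parts inventory, and expertise, as the article describes.

```python
# A simplified sketch of technician scheduling: filter on hard
# constraints, then score the remaining candidates. All data and
# weights here are invented for illustration.

TECHNICIANS = [
    {"name": "Ana",  "miles_away": 5,  "available": True,  "has_part": True, "skill": 3},
    {"name": "Ben",  "miles_away": 2,  "available": False, "has_part": True, "skill": 5},
    {"name": "Cruz", "miles_away": 12, "available": True,  "has_part": True, "skill": 4},
]

def best_technician(techs):
    """Score each available, properly stocked technician."""
    candidates = [t for t in techs if t["available"] and t["has_part"]]
    if not candidates:
        return None
    # Expertise counts for the score; distance counts against it.
    return max(candidates, key=lambda t: t["skill"] * 10 - t["miles_away"])

print(best_technician(TECHNICIANS)["name"])
```

Here Ben is skipped despite being closest and most skilled, because availability is a hard constraint rather than just another term in the score.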

Targeted marketing. Schor says IBM has updated an older AI technology used to plot points on graphs, allowing it to provide companies with improved targeted marketing. Companies that ship catalogs could use this technology to determine which customers should receive catalogs and how often they should be mailed.

“The companies know if they don’t mail any catalogs, they don’t have any orders, but the cost can be high per catalog,” Schor says. “The companies wanted to develop a model that predicts the likelihood of a response from each [catalog recipient].”

Voice recognition. Voice-recognition technology, while it still isn’t perfect, continues to make great strides. Voice-recognition technology has its roots in AI, and developers are employing the technology in several different arenas.

Schor says one example is an employee phone number search in a company database. IBM, for example, lets employees search for other employees’ phone numbers by speaking the desired employee’s name into the telephone. The employee database then uses AI technology and voice-recognition technology to find the desired employee and return the phone number. The voice-recognition technology employs statistical patterns and probability analysis to determine the name or names that most closely match the request. This technique usually is much faster than trying to spell an employee’s name using the keypad on the telephone receiver.
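The "closest match" step in such a directory search can be sketched with fuzzy string matching, using Python's standard difflib module as a simple stand-in for the acoustic and statistical models a real speech system would use. The names and numbers below are invented.

```python
# A minimal sketch of closest-name lookup in an employee directory.
# difflib's similarity ratio stands in for the probability analysis a
# real voice-recognition system performs; all entries are fictional.

import difflib

DIRECTORY = {
    "john anderson": "555-0101",
    "joan henderson": "555-0102",
    "jane andrews": "555-0103",
}

def lookup(heard_name, n=2):
    """Return the directory entries most similar to the recognized name."""
    matches = difflib.get_close_matches(
        heard_name.lower(), list(DIRECTORY), n=n, cutoff=0.6)
    return [(name, DIRECTORY[name]) for name in matches]

# A noisy recognition result still surfaces plausible candidates.
print(lookup("jon anderson"))
```

Returning a ranked short list rather than a single answer mirrors how such systems let the caller confirm the intended name.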

Working together. This form of AI, called distributed computing, allows several computers to work together to solve a problem, similar to a team of scientists working together. Within the team of scientists, you would see individuals who have specific abilities, allowing them to contribute a unique talent to the team. A team of AI computers working together might have the same types of unique specialized abilities, enhancing the team as a whole.



  The Future Is Rosey. Having AI experts make predictions about the future direction of AI is about as difficult as pinning them down on an exact definition for AI. However, most of them agree that AI research eventually will yield some amazing discoveries.

A HAL-type computer, which can react to your personal needs and manage several aspects of your life, might be available before the end of the decade. Development of interactive, realistic gaming environments using virtual reality hardware and AI-based software might be only several years away, too. If you’re waiting for a Rosey-type robotic maid, you might have to wait a few decades, but development of such sophisticated robotics is a strong possibility. At some point in the next half-century, AI technology could offer especially strong contributions in the field of human genetics, allowing doctors to make almost unfathomable discoveries about human health and curing disease.

The final frontier, though, may be building a computer that actually can learn from, reason with, and react to everyday situations just like a human. While such capabilities likely are at least several decades away from reality, they certainly aren’t impossible: Just think about all of the AI technologies that seemed cutting-edge 30 years ago that now are commonly used in several software packages. When it comes to AI technological advances, what appeared at one time to be science fiction seems to continually evolve into science fact.  

by Kyle Schurman



Peeking Into The Future

In his book, “Artificial Intelligence,” scientist Patrick Henry Winston discusses some potential uses now and in the future for AI, including:

Education. Computers featuring AI could help teachers understand why a student made a mistake rather than just noting the mistake. An alternative would be to create virtual worlds to meet each student’s personal level of understanding.

Engineering. Computers featuring AI could help engineers test designs more efficiently as well as identify future risks and learn from past mistakes while creating designs.

Farming. Using robotics and AI, farmers could program robots to take care of pest control and some aspects of harvesting automatically.

Health care. Expert systems will improve diagnosis techniques, aiding doctors. Robots could help with a variety of tasks, such as monitoring patients’ conditions and making beds.

Household. Robots and computers featuring AI could help with food selection, planning, and preparation; household maintenance; lawn mowing; and other basic chores.


Timeline For AI

AI (artificial intelligence) has branched in many directions since its birth in the 1950s. Here’s a timeline of some of the key events that have shaped AI.

1950s

  • IBM’s Arthur Samuel developed a checkers-playing program.

  • John McCarthy, later of MIT (Massachusetts Institute of Technology), coined the term artificial intelligence in 1956.

1960s

  • DENDRAL, the first expert system, was developed.

  • Shakey, a robot that combined locomotion, perception, and problem solving, was created.

1970s
  • Stanford University scientists developed MYCIN, the first rule-based expert system, which helped doctors identify causes of bacterial infection.
  • Another medical expert system, INTERNIST, tapped the knowledge of Dr. Jack Myers to help other doctors in diagnosis.

1980s

  • The first machines designed to run LISP, a commonly used language for writing AI research programs, were introduced.

  • CLIPS, an expert systems tool written in the C language, was developed by NASA.

1990s
  • Many AI technologies began to appear in mainstream software more frequently; users often don’t even realize they’re using an AI technology.

  • IBM’s Deep Blue computer defeated the world’s human chess champion.









© Copyright by Sandhills Publishing Company 2001. All rights reserved.