Stanford University News Service
425 Santa Teresa Street
Stanford, California 94305-2245
Tel: (650) 723-2558
Fax: (650) 725-0247
June 20, 2005
Dawn Levy, News Service: (650) 725-1944, firstname.lastname@example.org
From mahjong to Monopoly, bridge to Bingo, Sorry to Scrabble—games are serious fun. And with their diverse rules, they're also the perfect tools for exploring concepts in artificial intelligence (AI) and new approaches to programming, say Stanford computer scientists.
"Programs that think better should be able to win more games," wrote Michael Genesereth, computer science professor with the Stanford Logic Group, and Nathaniel Love, a computer science doctoral student, in an article on general game playing (GGP) to be published in the summer 2005 issue of AI Magazine.
The concept of general game playing is "drastically different," Genesereth said, from the computer programming done in the past to create programs like IBM's Deep Blue, which beat world chess champion Garry Kasparov in 1997.
"The computer just follows a recipe that has been given to it," Genesereth said of Deep Blue's highly specialized program. The applications for AI are limited in this case because the computer never has to "think" for itself. A program like Deep Blue demonstrates "the smarts of the programmer rather than the smarts of the program," said Genesereth. In AI research, what is important is the intelligence of the program itself.
General game playing requires that the computer be able to learn and understand rules, something that Deep Blue cannot do. Writing a program for GGP is like "trying to teach a child how to play a game" with the complication that "you would essentially be guaranteed that the game he will be asked to play is a game that neither you nor he has ever seen before," said Daniel Tarlow, an undergraduate who took Genesereth's course (CS227B) on GGP this spring. The final project for the course was developing a computer program capable of playing any game if given the rules. At the end of the term, the programs competed against each other in a tournament.
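As a rough sketch of the idea (this is illustrative Python, not the Stanford group's game-description language or any course team's code), a general player never hard-codes a particular game. It works only against an abstract rules interface, so the same decision procedure runs on any game handed to it; here the "rules" are a toy version of Nim, and the player estimates moves by random playouts:

```python
import random

class Nim:
    """Stand-in for rules handed to the player at match time.
    Any game exposing this same interface can be played unchanged."""
    def initial(self):
        return 7                      # one pile of seven stones
    def legal_moves(self, state):
        return [m for m in (1, 2, 3) if m <= state]
    def next_state(self, state, move):
        return state - move           # whoever takes the last stone wins
    def is_terminal(self, state):
        return state == 0

def playout(game, state, mover_is_me):
    """Finish the game with random moves; True if we made the winning last move."""
    while not game.is_terminal(state):
        state = game.next_state(state, random.choice(game.legal_moves(state)))
        mover_is_me = not mover_is_me
    return not mover_is_me            # the previous mover took the last stone

def choose_move(game, state, trials=500):
    """Generic player: knows nothing about Nim, only the rules interface."""
    def wins(move):
        after = game.next_state(state, move)
        return sum(playout(game, after, mover_is_me=False) for _ in range(trials))
    return max(game.legal_moves(state), key=wins)
```

From seven stones, taking three leaves the opponent a losing pile of four, and the playout estimates find that move without any Nim-specific knowledge. Random playouts are only one of many strategies a general player might use once it has digested the rules.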
Competitions between GGP programs are an "evaluation technique for intelligent systems," said Genesereth. By playing the programs against each other, it is possible to compare the relative intelligence of each system.
The students discovered that simpler systems were often more reliable. "My team ran into the pitfall of trying to make things too complex," said Tarlow. "In the end, this led us to write code that behaved well in some situations, but not in all situations." Love noted that many of the programs capable of deeper reasoning were less stable in the competition setting. The winning team—Adam Reichert, Joseph Baker-Malone and Yi-Lang Mok, a.k.a. "The Pirates"—built a very reliable and robust game player that managed to correctly understand each game that was given to it, Love said.
Early programmers envision R2D2-like prototype
Programs designed for general game playing exemplify a malleable and comprehensive type of system that harkens back to the early days of computer science theory, said Genesereth. When the idea of computers was first being developed in the 1950s, early programmers envisioned machines capable of synthesizing an array of different inputs to reach an independent decision, said Genesereth. The idea was for computers to be much more "autonomous" than they currently are. It soon became clear that a system capable of synthesis would be much more complicated to design than one dependent on individual programs with specific functions, Genesereth said.
The work being done by Genesereth, Love and others pioneering GGP is a revitalization of this early theory. It also applies directly to AI research. Whereas today's computers can process information, GGP programs go beyond that to adapt to novel situations, said Genesereth, whose work was inspired by weekly board game sessions with friends. "The best part was sitting down with a game you had never played before and having to figure out how it worked," Genesereth said. This ability to receive and comprehend new rules and then adapt to them differentiates human intelligence from that of a computer. No computer program is currently able to do this.
'Hodge-podge' stumps HAL
One of Genesereth's favorite games to illustrate the differences between human intelligence and computer intelligence is called "Hodge-podge." The game is really three separate games all running at the same time: chess, checkers and tic-tac-toe. Computers are miserable at the game, because they are unable to separate the consequence of a move in checkers from that of a move in chess, said Genesereth. The computer searches for the move that will be the best for all three games at once, while a human would be able to recognize that the games were separate and that a move in tic-tac-toe would not have an effect on the checkers game. The challenge for programmers is to create a system to allow the computer to "think" on multiple levels all at once.
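A back-of-the-envelope calculation (the per-turn move counts below are rough assumptions, not measurements) shows why this matters. A player that treats Hodge-podge as one undivided game considers every move in all three games at every turn, so its search tree multiplies the subgames together; a player that recognizes three independent games searches each one separately and only adds the work:

```python
# Assumed per-turn move counts for the three Hodge-podge subgames.
branching = {"chess": 20, "checkers": 7, "tic-tac-toe": 9}
depth = 6  # how many turns ahead each player looks

# Unfactored: every turn offers any move in any subgame (20 + 7 + 9 = 36).
naive = sum(branching.values()) ** depth

# Factored: each subgame is searched on its own, and the trees are added.
factored = sum(b ** depth for b in branching.values())

print(f"naive search:    {naive:,} positions")      # about 2.2 billion
print(f"factored search: {factored:,} positions")   # about 65 million
```

Even at a shallow six-move lookahead, the unfactored tree is dozens of times larger, and the gap widens exponentially with depth. Recognizing that a tic-tac-toe move cannot touch the checkers board is exactly the kind of structural insight humans apply without effort.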
But the research is not just about games. The philosophy underlying GGP—that a computer program should be able to adapt to new information and make independent decisions—has wide application. Business management is one sector Genesereth thinks would especially benefit from this revolution in programming. A program that could automatically adjust to a new rule or regulation, and make independent decisions based on those rules, would be widely useful to businesses assessing their compliance with specific laws. Currently, whenever a law changes, firms must recruit programmers to redesign their systems to accommodate the change in regulation. But an "intelligent" computer could simply be "told" the rule and adapt itself, said Genesereth.
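One hypothetical shape such a system could take (the rule names and account fields here are invented for illustration) is to hold regulations as data that an unchanging checker interprets, so a new rule is added by stating it rather than by reprogramming:

```python
# Hypothetical compliance sketch: rules live in a table the engine interprets.
# Amending a regulation means editing the table, not rewriting the program.
rules = {
    "max_interest_rate": lambda account: account["rate"] <= 0.10,
    "customer_verified": lambda account: account["verified"],
}

def violations(account):
    """Return the names of every rule the account fails."""
    return [name for name, check in rules.items() if not check(account)]

# "Telling" the system a new rule: one added entry, no engine changes.
rules["minimum_balance"] = lambda account: account["balance"] >= 100

print(violations({"rate": 0.12, "verified": True, "balance": 50}))
```

Real GGP systems express rules in a formal logic language rather than host-language functions, but the division of labor is the same: when the regulation changes, the rules change and the interpreter does not.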
To encourage more work on GGP in the AI community, the Stanford group will be hosting a GGP competition at this year's American Association for Artificial Intelligence conference in Pittsburgh, Pa., July 9-13.
"The nature of intelligence is synthesizing a wide array of information and making a decision," said Genesereth. General game playing provides the training wheels for both the programs and the programmers. Real artificial intelligence on a level with humans, the researchers avidly believe, is no longer just science fiction.
Kendall is a science-writing intern at the Stanford News Service.
Michael Genesereth, Computer Science: (650) 723-0934, email@example.com
Science-writing intern Kendall Madden wrote this release.
Email firstname.lastname@example.org or phone (650) 723-2558.