FRIENDLY COMPETITION

 

An Investigation Into Optimization Strategies of Genetic Algorithms and Swarm Intelligence

 

 

 

 

 

Ahmad S. Khalil

 

 

 

 

 

 

 

 

 

 

 

 

STS 129: Artificial Life

Professor Michael John Gorman

14 November 2001


When John Von Neumann conceived of self-reproducing automata, he also gave birth to the modern-day artificial life techniques that are founded on an amalgam of biological and digital knowledge. Specifically, Von Neumann recognized that, like human beings, nature is a problem solver. In fact, she has been adapting biological processes to solve problems for billions of years. As a result of his thought experiments regarding self-reproducing automata, Von Neumann became one of the integral founders of a promising field of artificial life that views biological phenomena as the key to unraveling highly complex optimization problems. Genetic algorithms and swarm intelligence are two examples of this artificial life strategy, which emulates living systems in an attempt to produce computer programs able to learn. Although it is common to group genetic algorithms and swarm intelligence together because of their ties to evolutionary, bottom-up methods of optimization, they are, in actuality, well suited for different optimization problems: genetic algorithms excel in game theory due to their competitive nature, while swarm intelligence shines in combinatorial optimization due to its cooperative nature.

 

GENES, SWARMS, AND A-LIFE: BIOLOGICALLY BASED

            Genetic algorithms and swarm intelligence both possess biological plausibility and, in particular, are concerned with simulating the biological progression of evolution. Their eventual goal is to achieve the status that the natural world has reached—a condition in which complex organisms are tremendously diversified and have myriads of useful traits.  “Imagine the organisms of today’s world as being the results of many iterations in a grand optimization algorithm” (Haupt 19). In this quote, an optimization algorithm replaces evolution as the force driving organisms to adapt. To genetic algorithms or swarm intelligence programs, the organism is simply a potential solution to the problem at hand. Therefore, as nature has evolved her organisms into their present optimized state, these artificial life strategies can similarly evolve their possible solutions into correct solutions.

            Genetic algorithms are simulations of evolution at the cellular level. Their basis is the interplay between genetics and evolution that Charles Darwin uncovered in his theory of natural selection. In the biological world, genes convey an organism’s hereditary information and are carried by chromosomes in the form of genetic codes—DNA. These genetic codes determine the characteristics of an organism, for example, the color of its eyes. In his theory of natural selection, Darwin concluded that an organism’s survivability is dependent on the characteristics it inherits from its parents and, further, that organisms “most fit” to survive in an environment will be chosen to breed and pass on their genes. As Sherry Turkle explains in her book Life on the Screen, Darwin’s revelations and the subsequent work of molecular biologists “clearly [communicate] that what stands behind life is not a metaphysical entity but DNA, whose code begins to be seen as life itself.”  Moreover, this biological code defines the “language to frame the discipline of artificial life, one which equates life with the algorithms of living systems” (Turkle). Genetic algorithms, as this paper attempts to illustrate, are merely manipulations of this language, written to simulate competitive genetic interactions, such as natural selection and mutation.  

            Swarm intelligence oversteps the intricate mechanisms governing evolution that genetic algorithms rely on. It is a field of artificial life that seeks to understand the collective behavior of animals, particularly insects, and to use this understanding for solving complex, nonlinear problems. Therefore, for swarm intelligence the “success of social insects ([that] have been colonizing a large portion of the world for several million years) can serve as a starting point for new metaphors in engineering and computer science” (Bonabeau 8). Essentially, the credibility of swarm systems lies in the fact that nature has evolved the collective behavior of insects with her own genetic algorithms for millions of years, thus making it “fit” enough to produce efficient, problem-solving strategies of interest to humans. Two important outgrowths of swarm modeling evident, for instance, in ants or bees foraging for food, are self-organization and autocatalysis. Self-organization refers to the complicated patterns that arise in the global unit—the collection of insects—as a result of simple, local interactions between individual insects, while autocatalysis delineates the swarm’s self-perpetuating tendencies. This paper aims to show that these outgrowths are advantageous in certain optimization problems because they are a result of the cooperative, social nature of swarm systems. 

 

OPTIMIZING OPTIMIZATION STRATEGIES  

Seeking Minima 

             Every optimization problem can be reduced to the basic minimum-seeking dilemma. Which path is the shortest in distance? Which decision results in the least cost? Such questions are fundamental to the study of optimization. The success of genetic algorithms and swarm intelligence in dealing with optimization problems stems from their shared ability to dynamically locate minima through simple, local interactions of possible solutions. In other words, they are able to learn the intricacies of their environments and subsequently adapt to changes. However, they differ in that genetic algorithms seek the minima of so-called cost surfaces, whereas swarm intelligence locates minima through the social sharing of information.

            As nature selects which organisms she deems fit enough to produce offspring, so genetic algorithms use a method of artificial selection to kill off unfit solutions. The level of fitness is dictated by the cost surface, a mathematical or experimental function that returns the cost of a given set of input parameters. For example, in his book Practical Genetic Algorithms, Randy Haupt presents a simple cost surface that returns geographic elevation for a given position on a map and proposes the goal of pinpointing the position of lowest elevation. A genetic algorithm would approach this by first generating an initial set of random geographic points. The elevation cost function would then be applied to these points in order to distinguish those at relatively low elevations from those at relatively high elevations. Next, the algorithm would discard the points at relatively higher elevations and allow only the survivors to breed, producing offspring solutions whose “genetic codes” are derived from combinations of the “genetic codes” of the surviving solutions. This process would repeat until the optimal position is eventually located. As Haupt’s example shows, genetic algorithms pit solutions—chromosomes—against one another in a competition to outlast each other and to ensure their genetic codes persist in future generations.
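The selection-and-breeding loop described above can be sketched in a few lines of Python. This is a minimal illustration, not Haupt's implementation: the `elevation` function here is an invented stand-in for his map data (with a known minimum at (1, −2)), and the averaging crossover and Gaussian mutation are one simple scheme among many.

```python
import random

def elevation(x, y):
    """Stand-in cost surface (not Haupt's map data): lowest point at (1, -2)."""
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def genetic_minimize(cost, pop_size=20, generations=100):
    random.seed(0)  # reproducible demo run
    # Initial random population of geographic points (candidate solutions).
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Artificial selection: rank by cost and keep the fittest half.
        pop.sort(key=lambda p: cost(*p))
        survivors = pop[: pop_size // 2]
        # Breeding: offspring blend the "genes" of two surviving parents,
        # plus a small mutation so the search keeps exploring.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(tuple((g1 + g2) / 2 + random.gauss(0, 0.1)
                                  for g1, g2 in zip(a, b)))
        pop = survivors + children
    return min(pop, key=lambda p: cost(*p))

best = genetic_minimize(elevation)
```

Because the fittest half survives unchanged each generation, the best point found never worsens; after a hundred generations the population clusters near the lowest point of the surface.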

            Similar to genetic algorithms, swarm intelligence is a versatile, robust, and decentralized means of foraging for minima. Swarm modeling, however, attempts to emulate the cooperative dynamic that colonies of insects, flocks of birds, or schools of fish exude. According to entomologist and sociobiologist E. O. Wilson (regarding fish schooling), “In theory at least, individual members of the school can profit from the discoveries and previous experience of all other members of the school during the search for food. This advantage can become decisive, outweighing the disadvantages of competition for food items, whenever the resource is unpredictably distributed in patches” (Wilson 209). The archetypal example of the ability of swarms to profit from many individual contributions is the eminent foraging strategy of ant colonies. As Eric Bonabeau explains in his book Swarm Intelligence, an ant colony is able to locate the food source closest to its nest very quickly through self-organization. Foraging ants depart from the nest in search of food supplies and return as soon as they discover one. All along their voyage, the foraging ants deposit pheromone, a chemical substance that marks a trail to the food. Assuming the amount of pheromone is directly related to the number of ants traveling a certain trail, the trail of the first returning ant will be saturated in pheromone to a greater degree than other trails; therefore, it will attract more ants. Eventually, the entire colony will migrate to this trail and exhaust the food source. Fundamental to this example of cooperative behavior is the use of positive feedback, or “recruitment” to a food source, by laying down pheromone trails (Bonabeau 9). Since they represent an individual ant’s means of socially interacting or communicating with the colony, these chemical trails are essential to maintaining the behavior and welfare of the entire ant colony.
It can be argued that since genetic algorithms also function on the amplification—recruitment—of “fit” solutions, they too exhibit positive feedback. However, it is important to note that genetic algorithms amplify “good” solutions by crossbreeding “good” solutions rather than through social and behavioral rules, such as when a bee performs a dance to inform its nest of nectar.
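The pheromone dynamic described above can be captured in a toy simulation of the classic double-bridge setup, in which ants choose between a short and a long branch to the same food source. This sketch is illustrative and not drawn from Bonabeau's code; modeling a shorter trail as a larger deposit per ant (more round trips per unit time) is a simplifying assumption.

```python
import random

def two_bridge_experiment(n_ants=1000, short_len=1.0, long_len=2.0):
    """Toy double-bridge simulation: each ant picks a branch with
    probability proportional to its pheromone level; the shorter branch
    is modeled as receiving a larger deposit per crossing, so positive
    feedback gradually concentrates the colony on it."""
    random.seed(3)  # reproducible demo run
    pheromone = {"short": 1.0, "long": 1.0}
    for _ in range(n_ants):
        total = pheromone["short"] + pheromone["long"]
        # Branch choice is proportional to current pheromone levels.
        if random.random() < pheromone["short"] / total:
            pheromone["short"] += 1.0 / short_len
        else:
            pheromone["long"] += 1.0 / long_len
    return pheromone

trail_strengths = two_bridge_experiment()
```

After a thousand ants, the short branch's trail dominates: each crossing of it deposits more pheromone, which attracts more ants, which deposit still more pheromone, the autocatalytic loop at the heart of the colony's self-organization.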

The Prisoners’ Dilemma: Competitive Strategies

            The Prisoners’ Dilemma is the classic game theory problem in which two prisoners, isolated from one another, are each given two options: admit guilt or testify to the guilt of the other prisoner. Though, superficially, the prisoners may seem independent of one another, they are actually in competition, and their fates are linked by the opposing prisoner’s claim. Similarly, populations of possible solutions in genetic algorithms, though seemingly chaotic and random, depend on the problem’s global cost function and on crossbreeding. Furthermore, like the prisoners, who are psychologically struggling with one another in the hope of reducing their prison sentences, the solutions of genetic algorithms are like “individuals that can ‘mate,’ ‘mutate,’ and ‘compete’ to ‘survive’ and ‘reproduce’” (Helmreich 138). In genetic algorithms and most game theory applications, there exists a presiding interest in evolution over the existence of individuals. For such reasons, genetic algorithms parallel game theory and are well suited to the optimization of game theory problems.
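The dilemma's competitive structure can be made concrete with a payoff table; the specific sentence lengths below are hypothetical, chosen only to exhibit the standard ordering of outcomes. Whatever the other prisoner does, testifying yields fewer years, which is exactly the kind of cost-driven choice a genetic algorithm's fitness function encodes.

```python
# Illustrative payoff table: years in prison for (prisoner A, prisoner B).
# The specific sentence lengths are hypothetical.
PAYOFF = {
    ("silent", "silent"):   (1, 1),    # both stay silent: light sentences
    ("silent", "testify"):  (10, 0),   # A stays silent, B testifies
    ("testify", "silent"):  (0, 10),   # A testifies, B stays silent
    ("testify", "testify"): (5, 5),    # both testify: moderate sentences
}

def best_response(opponent_choice):
    """Prisoner A's best move (fewest years) against a fixed choice by B."""
    return min(("silent", "testify"),
               key=lambda a: PAYOFF[(a, opponent_choice)][0])
```

Since `best_response` returns "testify" against either choice by the opponent, mutual testimony is the competitive equilibrium, even though mutual silence would serve both prisoners better.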

            Successful applications of genetic algorithms include alignment games, such as Word Guess, and evolution-based disciplines, such as economics. The game Word Guess, as introduced by Randy Haupt, challenges a genetic algorithm to guess an English word, given the number of letters in the word. Since one can use various cost functions to evaluate the proximity of the algorithm’s guess to the actual word, Haupt assesses the algorithm using two different cost functions. With the first cost function, the algorithm needed 27 iterations to correctly guess the word “Colorado,” while with the second it needed only 17. For the purposes of this paper, it is not important to dwell on why the second cost function performed better than the first. Rather, this example illustrates the effectiveness of genetic algorithms in alignment games based on cost parameters: the Word Guess algorithm required only 17 to 27 guesses to correctly align the letters of a word that could potentially be aligned in 27⁸ different ways.
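A Word Guess run along these lines can be sketched as follows. The cost function here, which counts mismatched letter positions, is a representative choice and not necessarily either of Haupt's two functions; the population size and mutation rate are likewise illustrative.

```python
import random
import string

TARGET = "colorado"
LETTERS = string.ascii_lowercase

def cost(guess):
    """Representative alignment cost: number of positions that miss the target."""
    return sum(g != t for g, t in zip(guess, TARGET))

def word_guess(pop_size=32, mutation_rate=0.1):
    random.seed(1)  # reproducible demo run
    pop = ["".join(random.choice(LETTERS) for _ in TARGET)
           for _ in range(pop_size)]
    iterations = 0
    while True:
        iterations += 1
        pop.sort(key=cost)
        if cost(pop[0]) == 0:              # perfect alignment found
            return pop[0], iterations
        survivors = pop[: pop_size // 2]   # artificial selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(TARGET))   # single-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(len(child)):           # occasional mutation
                if random.random() < mutation_rate:
                    child[i] = random.choice(LETTERS)
            children.append("".join(child))
        pop = survivors + children

word, iterations = word_guess()
```

Crossover rapidly assembles correct letters already present somewhere in the population, while mutation supplies the letters no initial guess contained; together they align the word in far fewer guesses than exhaustive search would require.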

            The study of economics—the study of how money and goods flow within a market—is constantly concerned with evaluating the cost of transactions. However, as the economic market becomes more robust and multifarious, so do cost functions, effectively making traditional, static models of economics unsuitable for modern trading needs. “The role of utility in economics is quite similar to the role of … fitness in evolutionary genetics,” declares John Holland in an attempt to illustrate that the present economy is actually a complex evolving system (Helmreich 174). The need to depart from the standard theory of economics in order to create fitter, more adaptive traders has inspired the use of artificial life techniques, specifically genetic algorithms, to model the ever-transforming economy. In fact, the Santa Fe Institute for the Sciences of Complexity (SFI) created an Economics Research Program in the mid-1980s for this purpose. The program’s approach has been to view the economic market as biologists view an ecology and to simulate everyday economic traffic as dynamic, genetic competition.

The Traveling Salesman Problem: Cooperative Strategies

            The complementary classic problem for swarm intelligence is the Traveling Salesman Problem. This challenge asks for the shortest route a salesman can travel if he is to visit N different cities. Though the problem statement appears trivial, it has survived for more than 150 years without a general solution. However, the use of swarm intelligence, specifically ant systems, has been quite successful in finding the salesman’s optimal path. Its success can be attributed to the fact that swarm intelligence excels in combinatorial optimization problems, where locating the optimal path mirrors the process of ants foraging for food. In both cases, ants are dispersed randomly in search of a “minimum,” be it food or the nearest city. The ant that returns first communicates its findings to the colony in order to attract other ants and “directly reinforce [a] good solution” (Bonabeau 41). This social sharing of information, which can take many forms, is a method of positive feedback that is essential to swarm intelligence and rather useful in helping the salesman find the best path.
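An ant-system attack on the Traveling Salesman Problem can be sketched as follows. This is a simplified rendition of the ant-system family of algorithms Bonabeau describes, with illustrative parameter choices: each artificial ant builds a tour edge by edge, preferring edges with strong pheromone and short length, and trails evaporate before being reinforced in proportion to tour quality.

```python
import math
import random

def aco_tsp(cities, n_ants=20, n_iters=50, evaporation=0.5, alpha=1.0, beta=2.0):
    """Minimal ant-system sketch for the TSP: alpha weights pheromone,
    beta weights inverse distance, and `evaporation` is the fraction of
    pheromone lost per iteration."""
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                options = list(unvisited)
                # Prefer edges with strong pheromone and short length.
                weights = [pher[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in options]
                j = random.choices(options, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone on every edge of every tour,
        # with shorter tours depositing more per edge.
        pher = [[p * (1.0 - evaporation) for p in row] for row in pher]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                pher[i][j] += 1.0 / length
                pher[j][i] += 1.0 / length
    return best_tour, best_len

random.seed(0)  # reproducible demo run
hexagon = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
           for k in range(6)]
tour, tour_length = aco_tsp(hexagon)
```

On six cities placed on a unit circle, the optimal tour is the hexagon's perimeter (length 6), and the colony converges on it quickly: good edges accumulate pheromone, which recruits more ants to them, exactly the positive feedback of the foraging trail.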

            Within the realm of combinatorial optimization, swarm intelligence finds its niche in routing applications and in specialized job scheduling tasks. Not surprisingly, these two applications correlate very well with two fundamental traits of swarm intelligence: positive feedback and division of labor. As Bonabeau explains in a recent article in Nature, communications “routing is the control mechanism that directs every message in a communications network from its source node to its destination node through a sequence of intermediate nodes or switching stations” (Bonabeau). Routing, in a simplified sense, is nothing more than trafficking, as is ant colony foraging. Whereas in one case, the optimal paths are the quickest and least congested, in the other case, they are the shortest paths that lead to food sources. The key to maintaining global, self-organized behavior is the social interaction between the system’s individuals. In routing, true problems arise when portions of a network become congested and new routes must be found rapidly. However, the dynamic capability of swarm intelligence trivializes this concern.

            Besides the mechanism of positive feedback, the cooperative nature of social insect swarms has other features to offer to the study of combinatorial optimization problem solving. One such feature is division of labor. Insects efficiently allocate specialized tasks and display extreme versatility among different jobs; as a result, their collective behavior has become a model for dynamic task scheduling problems. The Job-Shop Scheduling Problem, for instance, is a challenge in assigning jobs to machine times such that no two jobs are being performed on the same machine at a given time and job completion times are minimized. Needless to say, this optimization problem is of practical relevance to industry managers whose aim is to maximize profits. Swarm intelligence has been shown to successfully solve the Job-Shop Scheduling Problem for up to 10 jobs and 15 machines (Bonabeau 71).   

           

“Artificiality connotes perceptual similarity but essential difference, resemblance from without rather than within. The artificial object imitates the real by turning the same face to the outer system … imitation is possible because distinct physical systems can be organized to exhibit nearly identical behavior” (Simon). Though Simon takes a somewhat critical stance towards artificial life by drawing a fundamental and unalterable distinction between it and natural life, he also stumbles across a pivotal aspect of many artificial life techniques: the emulation of behavior. It is this characteristic—the capability of an artificial system to “exhibit nearly identical behavior”—that Simon deems critical in distinguishing artificial life from the natural. Ironically, this is the very characteristic on which genetic algorithms and swarm intelligence base their functionality. Though it seems peculiar to discuss the behavior of behavior imitators, genetic algorithms and swarm intelligence methods, depending on the natural processes they choose to imitate, do exhibit distinct behaviors, in the form of their optimization problem solving strategies.


Works Cited

Bonabeau, Eric, et al. “Inspiration for Optimization from Social Insect Behavior.” Nature 6 July 2000: 39-42.

Bonabeau, Eric, et al. Swarm Intelligence: From Natural to Artificial Systems. Oxford: Oxford University Press, 1999.

Fogel, David B. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. New York: IEEE Press, 2000.

Franks, Nigel R. “Army Ants: A Collective Intelligence.” American Scientist March-April 1989: 138-145.

Haupt, Randy L., and Sue Ellen Haupt. Practical Genetic Algorithms. New York: John Wiley & Sons, Inc., 1998.

Helmreich, Stefan. Silicon Second Nature: Culturing Artificial Life in a Digital World. Berkeley, CA: University of California Press, 1998.

Holland, John H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, Massachusetts: The MIT Press, 1992.

Levy, Steven. Artificial Life. New York: Vintage Books, 1992.

Simon, Herbert A. The Sciences of the Artificial. Cambridge, Massachusetts: The MIT Press, 1969.

Turkle, Sherry. “Artificial Life as the New Frontier.” Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995.

Wilson, Edward O. Sociobiology: The New Synthesis. Cambridge, Massachusetts: Belknap Press, 1975.