
Teaching Old Turtles New Tricks: Artificial Life Simulation Using Lingo, Part 2

May 30, 2001
by Andrew M. Phelps & Daniel R. Kunkle

[Editor's Note: In Part 1 of this article, the authors introduced a simulation that involved a series of autonomous agents (turtles) moving over a changing landscape looking for food. In this conclusion, Andy Phelps and Dan Kunkle show how genetic algorithms can be employed to improve the "intelligence" of the turtles. To watch what's going on, look at the Shockwave demo.]

Making it Lifelike

Overview of Artificial Life

Artificial life, most broadly, is life created by human effort rather than natural occurrence. This covers a massive range of study and must be narrowed in this case to the aspects of artificial life that this project addresses. Resources for artificial life in general can be found in the annotated bibliography.

In our case, artificial life is the simulation of life using computational methods. This can have two obvious uses: simulating existing biological systems to gain a better understanding of them, and simulating new systems that have no natural analog. The first use is becoming more and more widespread, and a number of tools allow this kind of explanatory simulation, including StarLogo, developed at the Media Laboratory, MIT. StarLogo is also used as an educational tool to teach students about the modeling of decentralized systems [MIT, 2001].

Our simulation has no natural analog. It is not meant to shed light on some existing system, but rather to create a novel environment that has some of the features of a natural system. This approach is becoming increasingly popular in computer gaming where the possibility of a game that conforms to its players and evolves over time is particularly appealing.

This project simulates two main features of living things: a decision-making process to choose possible actions based on current conditions, and evolution to adapt to an environment. The implementation of these features is covered in the following sections.

Decision-Making Procedure

Decision making is a trademark of life, and a complex one at that. What a given life form does at any moment depends on a great number of factors, which can be separated into two categories: internal and external. The state of the individual and the state of the environment as the individual perceives it determine what it will do next. This procedure is modeled in the simulation in a simplified form.

Because the traditional turtle from Logo can do little more than follow movement instructions, these upgraded individuals are referred to as Decision Making Turtles (DMTs), and they are the creatures you will find roaming the simulation.

At any given moment the DMT will find itself in one, and only one, condition. This condition is based on properties of both the DMT itself and its immediate surroundings. In the basic simulation there are only two properties, one internal and one external: (1) the DMT's energy level, and (2) whether or not the patch the DMT is currently on has food.

These properties are binary, each having only two possible values. For the first, the DMT has either high or low energy (high and low being defined at the start of the simulation). For the second, food is either present or not present. This leads to four possible conditions: (1) low energy/no food, (2) low energy/food present, (3) high energy/no food and (4) high energy/food present. Encoded as binary digits these conditions are 00, 01, 10 and 11 (or 0, 1, 2 and 3 in decimal). So, a DMT always finds itself in one of those four conditions. This covers one half of the decision-making process; a DMT must now decide what actions to perform for the given condition.
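The two-bit encoding described above can be sketched in a few lines. This is a Python sketch rather than Lingo, and the threshold name and value are illustrative assumptions, not from the original:

```python
# Assumed threshold separating "high" and "low" energy; in the original
# simulation this is defined at startup.
HIGH_ENERGY_THRESHOLD = 50

def condition_index(energy, food_present):
    """Return the DMT's condition, 0-3: (energy bit << 1) | food bit."""
    energy_bit = 1 if energy >= HIGH_ENERGY_THRESHOLD else 0
    food_bit = 1 if food_present else 0
    return energy_bit * 2 + food_bit

# 0 = low energy/no food, 1 = low energy/food present,
# 2 = high energy/no food, 3 = high energy/food present
```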

Along with the list of conditions, a list of actions that a DMT has available to it is specified. In this simulation that list includes the following:

  1. nothing - DMT performs no action
  2. speed up - DMT speeds up one speed unit
  3. slow down - DMT slows down one speed unit
  4. stop - DMT sets speed to zero
  5. head for food - DMT heads for the neighboring patch with the most food. Each patch has 8 neighboring patches. If the patch the DMT is on has more food than any of those 8, the DMT does nothing.
  6. turn around - DMT turns 180 degrees
  7. turn random amount - DMT turns a random amount, from 1 to 360 degrees
  8. head for neighboring turtles - DMT heads for the neighboring patch with the most DMTs on it. If the patch the DMT is currently on has more turtles than any of the surrounding 8 patches, the DMT does nothing.
  9. head away from neighboring turtles - DMT heads directly away from the neighboring patch with the most DMTs on it. If the patch the DMT is currently on has more turtles than any of the surrounding 8 patches, the DMT does nothing.

Notice that "move" is not an action. This is because each DMT moves every frame. If the DMT's speed is zero, moving will not change that DMT's location.

The only thing left is to connect the conditions to the actions. Each DMT is given a "genome": a list that specifies the actions to be performed in each condition. In this simulation, a DMT can perform up to 2 actions in any given condition. At the beginning, each DMT's genome is randomly generated, with either 1 or 2 random actions per condition.
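Random genome generation might look like this. This is a Python sketch (the original simulation is written in Lingo), and the function and constant names are hypothetical:

```python
import random

NUM_CONDITIONS = 4   # the four energy/food conditions described above
NUM_ACTIONS = 9      # actions numbered 1-9 in the list above

def random_genome(rng=random):
    """Build a genome: one list of 1 or 2 random actions per condition."""
    return [
        [rng.randint(1, NUM_ACTIONS) for _ in range(rng.randint(1, 2))]
        for _ in range(NUM_CONDITIONS)
    ]
```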

An example random genome: [3,4] [7,3] [5] [4]

The first set of brackets corresponds to the first condition listed above, low energy/no food. So, when a DMT has low energy and is not on a patch with food, it will first slow down one speed unit, then stop. Actions performed in the same condition can cancel each other out, such as speed up followed by slow down.
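Putting the genome to work, the decision step might be sketched as follows. Class and method names here are illustrative (the original is Lingo), and only the speed-related actions are implemented; the movement- and turning-related actions are omitted:

```python
# Python sketch of a Decision Making Turtle's per-frame decision step.
class DMT:
    def __init__(self, genome):
        self.genome = genome   # e.g. [[3,4], [7,3], [5], [4]]
        self.speed = 2
        self.position = 0

    def act(self, action):
        if action == 2:                          # speed up one unit
            self.speed += 1
        elif action == 3:                        # slow down one unit
            self.speed = max(0, self.speed - 1)
        elif action == 4:                        # stop
            self.speed = 0
        # actions 1 and 5-9 (nothing, head for food, turns...) omitted

    def step(self, condition):
        for action in self.genome[condition]:    # up to 2 actions
            self.act(action)
        self.position += self.speed              # every DMT moves every frame
```

With the example genome above, a DMT in condition 0 (low energy/no food) slows down, then stops, and therefore does not move that frame.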

These conditions and actions could easily be modified and augmented to provide the DMTs with goals and considerations other than simply finding food to stay alive. In one version of this simulation there were a number of red patches scattered around that were designated as "goals" that DMTs would consider in their decision making process.

A DMT's genome will not change during its lifetime, so DMTs with a poor genome will die earlier than those with a more advantageous one. The population of DMTs, however, can improve through evolution.

Evolution

The accepted method of evolution in nature is through natural selection, as formulated most famously by Charles Darwin. The theory of natural selection is based on a few assumptions and observations:

  1. The environment provides limited resources that individuals must compete for;
  2. Individuals have differences;
  3. Individuals can pass on their differences to offspring.

This leads to the inference that some individuals, because of their differences, will be better able to acquire those limited resources and will therefore survive and reproduce more easily. Their offspring in turn inherit some of their parents' traits, allowing them to survive more easily, and so on. Over time: evolution.

This is just the scenario that has been created in this simulation. The environment has limited food resources. Individuals differ through their mapping of conditions to actions. And, through their genome, better performing individuals can pass their traits onto their offspring.

Complex Behavior from Simple Brains

The use of evolution in this simulation resulted in emergent behaviors: complex behaviors that arise from the interaction of simpler actions.

In this case, we programmed the DMTs with a small set of simple actions but didn't explicitly define how they are to be used. The optimal use of these actions was determined by the DMTs' evolution in their environment.

One fairly important emergent behavior, as far as survival is concerned, is grazing. In the first generation, the DMTs often run wildly, wasting energy, or stand still, failing to find food sources. In later generations many of them seem to graze the environment, moving to patches that have large amounts of food and moving on once those patches are depleted. This behavior was in no way defined in the simulation, yet it appears consistently across several runs.

Another emergent behavior observed was rebounding, or turning around upon reaching the edge of the patches. This behavior was explicitly programmed into the system, as it is in any simulation where characters can move about in a finite area. In this case, though, the rebounding doesn't always work: DMTs, if they are stubborn enough, can continue to move off the edge of the patches. The rebounding programmed into the simulation could most likely have been improved to ensure no DMT could go off the board, but evolution provided for it after a number of generations. This is because DMTs that leave the board have no way of obtaining food and so usually die off sooner. Through evolution these individuals reproduce less often, and DMTs stay within the area of the patches more regularly. This was tested by removing the built-in rebounding and observing the results. To experiment with this yourself, make sure that the global variable gReboundingOn in startMovie is set to false, and let the program cycle through 10-15 generations. You should see a marked decrease in the number of turtles that fly off the board over time.

Genetic Algorithm

Genome Representations

The first step in implementing a genetic algorithm (GA) is representing the thing you want to improve with a genome of some sort. In this case, since it is the decision-making effectiveness of the individuals that we want to improve, the genome used in the genetic algorithm is the same as that used for decision-making. That is, a list of actions for each condition.

Genetic Algorithm Representations

The procedures described here are based on Goldberg's explanation in his introductory chapter on genetic algorithms [Goldberg 1989]. Much work has been done on GAs in the last few decades, along with a number of good introductions for those new to the subject.

A genetic algorithm selects those DMTs that have performed better (i.e. ones that have lived longer) and mates them, producing a new generation of DMTs. This new population is produced by halting the simulation momentarily and performing the following steps:

  1. Calculate each DMT's fitness - DMTs that have lived longer by feeding more will have a higher fitness. In more complex simulations fitness would be based on a number of criteria.
  2. Create a mating pool - This mating pool is half the size of the original population. Each member of the mating pool is selected randomly from the current population, with DMTs that have a higher fitness being more likely to be selected.
  3. Reproduction through crossover - Each member of the mating pool is mated with one other, so that each DMT that made it to the mating pool reproduces with one and only one other DMT from the pool. Their decision-making genomes are split at a random point, crossed over, and produce two offspring (see Figure 4).
  4. Mutation - Each child has a small random chance that its genome will be slightly altered through mutation. This helps encourage diversity in the population, hopefully leading to better genomes that might not otherwise have been discovered. It is the only way in this simulation to achieve new combinations of actions within a condition, as crossover only acts on whole sets of actions.
  5. Restart the simulation and repeat steps 1-4 at set intervals (about one generation per minute, depending on the speed of your machine).
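Step 2, fitness-proportionate (roulette-wheel) selection in Goldberg's style, might be sketched as follows. Again this is Python rather than Lingo, and the function name and signature are illustrative:

```python
import random

def mating_pool(population, fitnesses, rng=random):
    """Fill a pool half the population's size, picking individuals at
    random with probability proportional to their fitness."""
    pool_size = len(population) // 2
    total = float(sum(fitnesses))
    pool = []
    for _ in range(pool_size):
        pick = rng.uniform(0, total)     # spin the roulette wheel
        running = 0.0
        for individual, fit in zip(population, fitnesses):
            running += fit
            if running >= pick:          # this individual's slice was hit
                pool.append(individual)
                break
    return pool
```

Fitter DMTs own a larger slice of the wheel, so they are more likely to be selected, but even weak individuals retain a small chance of reproducing.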

Parent1 = [1,2] [6] [9,4] [5]
Parent2 = [4,4] [3] [7,2] [9,8]
random crossover point = 3 (between third and fourth conditions)
offspring1 = [1,2] [6] [9,4] [9,8]
offspring2 = [4,4] [3] [7,2] [5]
Both parents and both offspring are then placed into the next population.

Figure 4: a simple example of crossover.
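The crossover and mutation steps from Figure 4 can be sketched as follows (a Python sketch with illustrative names; the original is Lingo):

```python
import random

def crossover(parent1, parent2, point):
    """Swap the per-condition action lists after the crossover point,
    producing two offspring, as in Figure 4."""
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(genome, rate=0.01, num_actions=9, rng=random):
    """With a small per-gene probability, replace an action at random.
    This is the only source of new action combinations within a condition."""
    return [
        [rng.randint(1, num_actions) if rng.random() < rate else action
         for action in condition]
        for condition in genome
    ]
```

Running `crossover` on the two parents above with point 3 reproduces the offspring shown in Figure 4.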

Andrew (Andy!) is a professor at the Rochester Institute of Technology (RIT) serving in the Dept. of Information Technology, specializing in Multimedia and Web Programming. He is developing a game programming curriculum, with an emphasis on Lingo-based solutions as well as more traditional approaches. Visit his home at andysgi.rit.edu. Dan Kunkle is an undergraduate in the Information Technology program at RIT. Artificial life and genetics are his passions. He hopes to pursue this line of study in graduate school as well. Andrew Phelps says that he's a fabulously smart kid.

Copyright 1997-2017, Director Online. Article content copyright by respective authors.