Evolutionary Robotics (ER) is a methodology that uses evolutionary computation to develop controllers for autonomous robots. Algorithms in ER typically operate on a population of candidate controllers, initially sampled from some distribution. This population is then repeatedly modified according to a fitness function. In the case of genetic algorithms (or "GAs"), a common method in evolutionary computation, the population of candidate controllers is repeatedly grown using crossover, mutation, and other GA operators, and then culled according to the fitness function.

The candidate controllers used in ER applications are often drawn from some subset of artificial neural networks, although some applications (including SAMUEL, developed at the Naval Center for Applied Research in Artificial Intelligence) use collections of "IF THEN ELSE" rules as the constituent parts of an individual controller. In principle, any set of symbolic formulations of control laws (sometimes called policies in the machine learning community) can serve as the space of possible candidate controllers. It is worth noting that artificial neural networks can also be used for robot learning outside the context of evolutionary robotics; in particular, other forms of reinforcement learning can be used to learn robot controllers.
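To make the grow-and-cull loop concrete, here is a minimal sketch, not taken from any particular ER system, of a GA evolving a fixed-length vector of neural-network weights. The genome length, population size, mutation rate, and the evaluate_fitness function are illustrative placeholders; in a real ER setup the fitness function would run the controller on a robot or in simulation and score its behaviour.

```python
import random

GENOME_LENGTH = 32      # number of network weights (illustrative)
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.05

def random_genome():
    """Initial candidate: weights sampled from a simple distribution."""
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def evaluate_fitness(genome):
    """Placeholder: a real ER setup would evaluate the controller's
    behaviour on a robot or in simulation and return a score."""
    return -sum(w * w for w in genome)  # dummy objective

def crossover(parent_a, parent_b):
    """One-point crossover of two parent genomes."""
    point = random.randrange(1, GENOME_LENGTH)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    """Perturb each weight with small probability."""
    return [w + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Grow the population with offspring produced by crossover and mutation...
    offspring = [mutate(crossover(*random.sample(population, 2)))
                 for _ in range(POPULATION_SIZE)]
    # ...then cull back to the original size according to the fitness function.
    population = sorted(population + offspring,
                        key=evaluate_fitness, reverse=True)[:POPULATION_SIZE]

best = max(population, key=evaluate_fitness)
```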
Monday, May 19, 2008