Comparing multimodal optimization and illumination
(2017) : Vassiliades, Vassilis Chatzily...
Proceedings of the Genetic and Evolutionary Computation Conference Companion (ACM)
DOI: https://doi.org/10.1145/3067695.3075610
#behavioural_diversity #illumination_algorithm #novelty_search #MAP_Elites #quality_diversity #my_bibtex
Searching for quality diversity when diversity is unaligned with quality
(2016) : Pugh, Justin K and Soros, Lisa B and Stanley, Kenneth O
DOI: https://doi.org/10.1007/978-3-319-45823-6_82
#behavioural_diversity #novelty_search #quality_diversity #my_bibtex
Quality Diversity: a New Frontier for Evolutionary Computation
(2016) : Pugh, Justin K and Soros, Lisa B and Stanley, Kenneth O
DOI: https://doi.org/10.3389/frobt.2016.00040
#behavioural_diversity #evolutionary_algorithms #novelty_search #quality_diversity #my_bibtex
Evolving a Behavioral Repertoire for a Walking Robot

Abstract. Numerous algorithms have been proposed to allow legged robots to learn to walk. However, most of these algorithms are devised to learn walking in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution introduces a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.

MIT Press
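The abstract's first ingredient, novelty search with local competition, scores each candidate on two axes: how far its behavior descriptor lies from its nearest archived neighbors (novelty), and how its fitness ranks among those same neighbors (local competition). A minimal sketch of that scoring, assuming 2-D behavior descriptors, Euclidean distance, and an illustrative neighborhood size k — none of which are the paper's actual settings:

```python
import math
import random

def novelty_and_local_competition(candidate, archive, k=3):
    """Score a candidate by (novelty, local competition).

    candidate: (behavior, fitness), behavior is a 2-D point
    archive:   list of (behavior, fitness) pairs already stored
    novelty:   mean distance to the k nearest archived behaviors
    local competition: fraction of those neighbors the candidate outperforms
    """
    behavior, fitness = candidate
    if not archive:
        # First individual is maximally novel and beats nobody it has met.
        return float("inf"), 1.0
    neighbors = sorted(
        (math.dist(behavior, b), f) for b, f in archive
    )[:k]
    novelty = sum(d for d, _ in neighbors) / len(neighbors)
    local_comp = sum(1 for _, f in neighbors if fitness > f) / len(neighbors)
    return novelty, local_comp

# Toy usage: random search over 2-D "behaviors" with a synthetic fitness;
# archive admission by a hypothetical novelty threshold or local dominance.
random.seed(0)
archive = []
for _ in range(200):
    b = (random.uniform(-1, 1), random.uniform(-1, 1))
    f = -b[0] ** 2 - b[1] ** 2
    nov, lc = novelty_and_local_competition((b, f), archive)
    if nov > 0.2 or lc == 1.0:
        archive.append((b, f))
```

In TBR-Evolution the archive itself becomes the product: each stored individual is a controller for one reachable direction, which is why solutions a plain objective-driven search would discard are kept instead.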