Mech-Elites: Illuminating the Mechanic Space of GVGAI

This paper introduces a fully automatic method of mechanic illumination for general video game level generation. Using the Constrained MAP-Elites algorithm and the GVG-AI framework, this system generates the simplest tile-based levels that contain specific sets of game mechanics and also satisfy playability constraints. We apply this method to illuminate the mechanic space of 4 different games in GVG-AI: Zelda, Solarfox, Plants, and RealPortals.

arXiv.org
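
The core loop described in the abstract — MAP-Elites over a behaviour space of mechanic sets, with a playability constraint and simplicity as fitness — can be sketched roughly as below. This is a toy note-to-self version, not the paper's implementation: the tile set, the playability check (exactly one key and one door), and the mechanic descriptor are all invented stand-ins, and real Constrained MAP-Elites keeps a separate infeasible population rather than discarding infeasible candidates.

```python
import random

random.seed(1)

WIDTH, HEIGHT = 8, 6
TILES = ".#KDE"  # floor, wall, key, door, enemy (hypothetical tile set)

def random_level():
    # Bias toward floor tiles so the playability constraint is reachable.
    return [[random.choice(TILES) if random.random() < 0.2 else "."
             for _ in range(WIDTH)] for _ in range(HEIGHT)]

def mutate(level):
    # Point mutation: change a single random tile.
    child = [row[:] for row in level]
    child[random.randrange(HEIGHT)][random.randrange(WIDTH)] = random.choice(TILES)
    return child

def mechanics(level):
    # Behaviour descriptor: which mechanic-enabling tiles appear at all.
    flat = [t for row in level for t in row]
    return frozenset(t for t in "KDE" if t in flat)

def playable(level):
    # Stand-in constraint: a level needs exactly one key and one door.
    flat = [t for row in level for t in row]
    return flat.count("K") == 1 and flat.count("D") == 1

def simplicity(level):
    # Fitness: more floor tiles = simpler level.
    return sum(t == "." for row in level for t in row)

archive = {}  # mechanic set -> (fitness, level)

for step in range(20000):
    if archive and random.random() < 0.9:
        candidate = mutate(random.choice(list(archive.values()))[1])
    else:
        candidate = random_level()
    if not playable(candidate):
        continue  # the real algorithm evolves an infeasible population instead
    cell = mechanics(candidate)
    fit = simplicity(candidate)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, candidate)

print(len(archive), "mechanic cells illuminated")
```

Each archive cell ends up holding the simplest playable level found so far that exhibits exactly that set of mechanics, which is the "illumination" the paper refers to.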
An illumination algorithm approach to solving the micro-depot routing problem
(2019) : Urquhart, Neil and Höhl, Silke and Har...
DOI: https://doi.org/10.1145/3321707.3321767
#search #quality_diversity #routing #my_bibtex
In: Proceedings of the Genetic and Evolutionary Computation Conference (ACM)

Comparing multimodal optimization and illumination
(2017) : Vassiliades, Vassilis and Chatzily...
DOI: https://doi.org/10.1145/3067695.3075610
#behavioural_diversity #illumination_algorithm #novelty_search #MAP_Elites #quality_diversity #my_bibtex
In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (ACM)

Searching for quality diversity when diversity is unaligned with quality
(2016) : Pugh, Justin K and Soros, Lisa B and Stanley, Kenneth O
DOI: https://doi.org/10.1007/978-3-319-45823-6_82
#behavioural_diversity #quality_diversity #novelty_search #my_bibtex
Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi
(2020) : Canaan, Rodrigo et al
url: https://arxiv.org/abs/2004.13710
#meta_strategy #quality_diversity #hanabi #ad_hoc_cooperation #agents #my_bibtex

Hanabi is a cooperative game that brings the problem of modeling other players to the forefront. In this game, coordinated groups of players can leverage pre-established conventions to great effect, but playing in an ad-hoc setting requires agents to adapt to their partners' strategies with no previous coordination. Evaluating an agent in this setting requires a diverse population of potential partners, but so far the behavioral diversity of agents has not been considered in a systematic way. This paper proposes Quality Diversity algorithms as a promising class of algorithms to generate diverse populations for this purpose, and generates a population of diverse Hanabi agents using MAP-Elites. We also postulate that agents can benefit from a diverse population during training and implement a simple "meta-strategy" for adapting to an agent's perceived behavioral niche. We show that this meta-strategy can work better than generalist strategies even outside the population it was trained with, provided its partner's behavioral niche can be correctly inferred; in practice, however, a partner's behavior depends on and interferes with the meta-agent's own behavior, suggesting an avenue for future research in characterizing another agent's behavior during gameplay.

arXiv.org
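
The "meta-strategy" in the abstract amounts to: keep a MAP-Elites-style archive of specialist policies keyed by behavioural niche, infer the partner's niche from its observed actions, and play the matching specialist. A minimal sketch of that idea — the niche descriptor (hint rate), the thresholds, and the policy names are all invented for illustration, not taken from the paper:

```python
from collections import Counter

NICHES = ["rarely_hints", "sometimes_hints", "often_hints"]

def niche_of(hint_rate):
    # Map a scalar behaviour descriptor onto a discrete archive cell.
    if hint_rate < 0.2:
        return "rarely_hints"
    if hint_rate < 0.6:
        return "sometimes_hints"
    return "often_hints"

def infer_niche(observed_actions):
    # Estimate the partner's hint rate from its action history.
    counts = Counter(observed_actions)
    hint_rate = counts["hint"] / max(len(observed_actions), 1)
    return niche_of(hint_rate)

# Archive of specialists: one (hypothetical) response policy per niche,
# standing in for agents evolved with MAP-Elites.
archive = {
    "rarely_hints": "play_conservatively",
    "sometimes_hints": "balance_hints_and_plays",
    "often_hints": "trust_partner_hints",
}

partner_history = ["play", "hint", "play", "discard", "play", "play"]
policy = archive[infer_niche(partner_history)]
print(policy)  # -> play_conservatively (hint rate 1/6 falls in the lowest cell)
```

The paper's caveat maps directly onto this sketch: the inference step assumes the partner's behaviour descriptor is stable, but in play it shifts in response to the meta-agent's own actions.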
Discovering Representations for Black-box Optimization
(2020) : Gaier, Adam and Asteroth, Alexande...
DOI: https://doi.org/10.1145/3377930.3390221
#representation #black_box #discovery #quality_diversity #MAP_Elites #optimisation #my_bibtex
In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference (ACM)

Open-ended evolution with multi-containers QD
(2018) : Doncieux, Stephane and Coninx, Alexandre
DOI: https://doi.org/10.1145/3205651.3205705
#evolutionary_algorithms #quality_diversity
#my_bibtex
In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (ACM)

Go-Explore: a New Approach for Hard-Exploration Problems
(2019) : Ecoffet, Adrien and Huizinga, Joost and Lehman, Joel and Stanley, Kenneth O and Clune, Jeff
url: https://arxiv.org/abs/1901.10995
#Go_Explore #machine_learning #quality_diversity #rein
#my_bibtex

A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).

arXiv.org
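
The three principles in the abstract (remember states, first *return* without exploration, then explore) can be sketched as a toy version of Go-Explore's first phase. Everything here is an invented stand-in for the paper's Atari setup: the environment is a deterministic 1-D walk, cells are raw states, and "returning" replays the stored action sequence, exploiting determinism exactly as principle (3) allows.

```python
import random

def step(state, action):
    # Deterministic toy dynamics: walk left/right on a line.
    return state + (1 if action == "right" else -1)

def replay(actions):
    # "Go": deterministically return to a cell by replaying its trajectory.
    state = 0
    for a in actions:
        state = step(state, a)
    return state

archive = {0: []}  # cell (state) -> shortest known action sequence reaching it

random.seed(0)
for _ in range(500):
    # Select a cell to return to (uniformly here; the paper weights cells
    # by heuristics such as visit counts).
    cell, actions = random.choice(list(archive.items()))
    state = replay(actions)
    trajectory = list(actions)
    # "Explore": take a few random actions from the restored state.
    for _ in range(5):
        a = random.choice(["left", "right"])
        state = step(state, a)
        trajectory.append(a)
        if state not in archive or len(trajectory) < len(archive[state]):
            archive[state] = list(trajectory)

print("cells discovered:", len(archive))
```

The archive invariant — replaying a cell's stored trajectory lands exactly on that cell — is what makes the "return without exploration" step possible; the paper's second phase then robustifies such trajectories via imitation learning so the policy no longer depends on determinism.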