Abstract
The problem-oriented design of collective intelligence systems (CIS) is in itself an open problem. Previous research draws upon findings from biological swarm intelligence to derive guiding design principles but also highlights the importance of evaluating the system's state with respect to the given problem. We investigate this evaluation task on the individual and the global level within a framework inspired by reinforcement learning. We map different modes of evaluation to different schemes of rewarding agents, thereby illustrating that designers of CIS face the task of reward shaping. We simulate several reward schemes as variations of the well-known ant colony system (ACS). We show that rewards in the ACS, although they consist only of a single value, the metaphorical pheromone concentration, have complex semantics and coordinate the distribution of information and the allocation of work within the system. This makes the ACS a valuable source of inspiration for CIS with human agents.
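For readers unfamiliar with the ACS reward signal referred to in the abstract, the following is a minimal sketch of the two standard ACS pheromone updates (Dorigo and Gambardella's formulation), read here as a reward scheme in which the single pheromone value both rewards good solutions and redistributes the agents' attention. The parameter values and function names are illustrative assumptions, not the paper's simulation code.

```python
def local_update(tau: float, rho: float = 0.1, tau0: float = 0.01) -> float:
    """Local (online) ACS update, applied each time an ant traverses an edge.
    It pulls the pheromone value back toward the baseline tau0, making the
    just-used edge less attractive to the next ants and spreading the search."""
    return (1.0 - rho) * tau + rho * tau0


def global_update(tau: float, on_best_tour: bool, best_len: float,
                  alpha: float = 0.1) -> float:
    """Global (offline) ACS update, applied after all ants have built a tour.
    Only edges on the best tour found so far are rewarded, with a reward
    inversely proportional to that tour's length."""
    if on_best_tour:
        return (1.0 - alpha) * tau + alpha * (1.0 / best_len)
    return tau
```

In this reading, the global update rewards the quality of the collective result, while the local update implicitly allocates work by discouraging agents from repeating choices that have just been made.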
Recommended Citation
Kornrumpf, Alexander and Baumöl, Ulrike, "Reward Shaping in the Ant Colony System: Lessons for the Design of Collective Intelligence Systems" (2015). Wirtschaftsinformatik Proceedings 2015. 57.
https://aisel.aisnet.org/wi2015/57