As a large and ever-increasing part of our economic and social interactions moves to cyberspace, data-driven algorithmic decision making by autonomous agents is fast becoming an integral and inseparable part of our lives. These agents compete in uncertain and volatile environments and must in turn learn aspects of these environments, and of each other, in order to dynamically optimize their performance. What's more, even the humans in the loop increasingly depend on data-driven signals for their own decision making, e.g., on automated rankings and recommendations.
Given the inherently distributed, strategic, and dynamic nature of this setting, learning in dynamic games, with its broad spectrum of modeling and analysis tools, is a prime candidate for providing this endeavour with theoretical underpinnings, balancing unification of the mathematical substructure against retention of the distinct flavors and diversity of the competing paradigms. On the modeling front, these tools range from dynamic cooperative games to mean field and evolutionary games; on the learning front, from reinforcement learning to learning by imitation.