Building a Theoretical Foundation for Multi-Agent Reinforcement Learning through Reinforcement Learning and Sociological Theory

Are you passionate about solving challenging research problems with significant practical impact? Do you thrive on using theoretical approaches grounded in mathematical modeling and analysis to gain deeper insights into critical research questions and design efficient, scalable algorithms? Our research group is dedicated to building a theoretical foundation for multi-agent reinforcement learning (MARL) systems. MARL is a key subfield of machine learning in which multiple agents learn, by interacting within a shared environment, to achieve specific goals. However, MARL presents several critical challenges, including imperfect state information, noisy and delayed reward signals, scalability issues, and the inability to directly observe other agents' actions.
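The last of these challenges can be made concrete with a small sketch. Below, two independent Q-learners repeatedly play a 2x2 coordination game: each agent observes only its own action and the shared reward, so from its perspective the other agent is simply folded into a non-stationary environment. The game, payoffs, and hyperparameters are illustrative, not taken from our work.

```python
import random

# Shared payoff for a 2x2 coordination game (illustrative): both agents
# receive reward 1 only when they choose the same action.
PAYOFF = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # q[agent][action]: each agent keeps its own action values for this
    # stateless repeated game. Neither agent can see the other's choice.
    q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        acts = []
        for a in range(2):
            if rng.random() < epsilon:          # epsilon-greedy exploration
                acts.append(rng.randrange(2))
            else:                               # greedy action, ties -> 0
                acts.append(0 if q[a][0] >= q[a][1] else 1)
        r = PAYOFF[(acts[0], acts[1])]
        for a in range(2):
            # Each agent updates as if it faced a single-agent bandit:
            # the other learner is treated as part of the environment,
            # which is exactly what makes the problem non-stationary.
            q[a][acts[a]] += alpha * (r - q[a][acts[a]])
    return q

q = train()
```

With this setup the two learners typically settle on one of the two coordinated equilibria, even though neither ever observes the other's action; the point of the sketch is how fragile that coordination is without any information sharing.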

Our Unique Approach:

Our approach to studying MARL systems is distinctive in that we view these systems through the lens of social actions. This perspective enables us to draw on theoretical concepts not only from reinforcement learning but also from economics and sociology. By incorporating concepts from sociological theory, we design MARL systems and algorithms that mirror how humans make decisions and collaborate in social groups. This approach not only yields MARL systems that are more intuitive, adaptable, and efficient, but also deepens our understanding of the social processes underlying human collaboration and coordination.

Research Mission:

Our mission is to develop a theoretical foundation for MARL systems and use this foundation to understand how agents can better collaborate and share information to optimize collective outcomes. This research not only advances the state of the art but also paves the way for more scalable, efficient, and adaptable multi-agent systems. In our research, we address fundamental theoretical questions, including:
  • How should agents interact to maximize collective rewards?
  • What information should be shared, and how should it be used?
  • How can MARL systems be made more intuitive, efficient, and adaptable?
For this analysis, we combine results and techniques from game theory, stochastic processes, optimization theory, opinion dynamics, and social network formation.

Impact:

Our research has the potential to fundamentally transform the understanding and design of MARL systems. First, the theoretical foundation we develop will enable the systematic design of learning algorithms, a foundation that currently exists for single-agent reinforcement learning (RL) systems through Markov decision processes (MDPs) but is lacking for MARL systems. Second, by studying MARL systems through the lens of social actions, we aim to develop novel algorithms that mirror how humans interact and make decisions in social groups. This approach will help us create MARL systems that are:
  • More intuitive to understand and implement,
  • More efficient, achieving higher performance, and
  • More adaptable to dynamic and complex environments.
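To illustrate what "a systematic foundation through MDPs" means in the single-agent case, here is value iteration on a toy two-state MDP; the Bellman optimality recursion lets the optimal values (and hence an optimal policy) be computed directly from the model. The states, actions, and numbers are purely illustrative; no comparable general recipe yet exists for MARL.

```python
# P[(state, action)] = list of (probability, next_state, reward) outcomes
# for a toy MDP: "go" from s0 usually reaches the rewarding state s1,
# where "stay" pays 1 per step. All numbers are illustrative.
P = {
    ("s0", "stay"): [(1.0, "s0", 0.0)],
    ("s0", "go"):   [(0.9, "s1", 1.0), (0.1, "s0", 0.0)],
    ("s1", "stay"): [(1.0, "s1", 1.0)],
    ("s1", "go"):   [(1.0, "s0", 0.0)],
}
STATES = ["s0", "s1"]
ACTIONS = ["stay", "go"]

def value_iteration(gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point."""
    v = {s: 0.0 for s in STATES}
    while True:
        new_v = {
            s: max(
                sum(p * (r + gamma * v[s2]) for p, s2, r in P[(s, a)])
                for a in ACTIONS
            )
            for s in STATES
        }
        if max(abs(new_v[s] - v[s]) for s in STATES) < tol:
            return new_v
        v = new_v

v = value_iteration()
```

Here the optimal policy is to reach and then stay in s1, whose value converges to 1/(1-gamma) = 10. It is precisely this kind of principled, model-based derivation of optimal behavior that we aim to extend to the multi-agent setting.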
If you’re interested in contributing to this cutting-edge research and exploring the intricate dynamics of multi-agent systems through a mathematical lens, we invite you to join our team. Together, we can push the boundaries of what MARL systems can achieve.