A variety of emerging applications involve agents solving complex problems in real-world domains, such as intelligent sensing systems for the Internet of Things (IoT), automated configurators for critical infrastructure networks, and intelligent resource allocation in social domains (e.g., security games for the deployment of security resources, or auctions and procurements for allocating goods and services). Agents in these domains commonly leverage different forms of optimization and/or learning to solve complex problems.
The goal of the workshop is to provide researchers with a venue to discuss models or techniques for tackling a variety of multi-agent optimization problems. We seek contributions in the general area of multi-agent optimization, including distributed optimization, coalition formation, optimization under uncertainty, winner determination algorithms in auctions and procurements, and algorithms to compute Nash and other equilibria in games. Of particular emphasis are contributions at the intersection of optimization and learning. See below for a (non-exhaustive) list of topics.
This workshop invites works from different strands of the multi-agent systems community that pertain to the design of algorithms, models, and techniques to deal with multi-agent optimization and learning problems or problems that can be effectively solved by adopting a multi-agent framework.
Topics
The workshop organizers invite paper submissions on the following (and related) topics:
- Optimization for learning (strategic and non-strategic) agents
- Learning for multi-agent optimization problems
- Distributed constraint satisfaction and optimization
- Winner determination algorithms in auctions and procurements
- Coalition or group formation algorithms
- Algorithms to compute Nash and other equilibria in games
- Optimization under uncertainty
- Optimization with incomplete or dynamic input data
- Algorithms for real-time applications
- Cloud, distributed and grid computing
- Applications of learning and optimization in societally beneficial domains
- Multi-agent planning
- Multi-robot coordination
The workshop is of interest both to researchers investigating applications of multi-agent systems to optimization problems in large, complex domains and to those examining optimization and learning problems that arise in systems composed of many autonomous agents. In so doing, this workshop aims to provide a forum for researchers to discuss common issues that arise in solving optimization and learning problems in different areas, to introduce new application domains for multi-agent optimization techniques, and to elaborate common benchmarks to test solutions.
Finally, the workshop will welcome papers that describe the release of benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.
The workshop will be a one-day meeting. It will include a number of technical sessions, a virtual poster session where presenters can discuss their work with the aim of further fostering collaborations, and multiple invited speakers covering crucial challenges for the field of multi-agent optimization and learning.
Attendance
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
- March 19, 2022 (23:59 UTC-12) – Submission Deadline [Extended]
- April 23, 2022 (23:59 UTC-12) – Acceptance notification
- April 23, 2022 (23:59 UTC-12) – AAMAS/IJCAI Fast Track Submission Deadline
- April 28, 2022 (23:59 UTC-12) – AAMAS/IJCAI Fast Track Acceptance Notification
- May 3, 2022 (23:59 UTC-12) – Poster and Presentations due
- May 10, 2022 – Workshop Date [Auckland time (UTC+12)]
Submission URL: https://easychair.org/conferences/?conf=optlearnmas22
- Technical Papers: Full-length research papers of up to 8 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.
All papers must be submitted in PDF format, using the AAMAS-22 author kit.
Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAMAS 2022 and IJCAI 2022 technical programs are welcome.
Fast Track (Rejected AAMAS or IJCAI papers)
Rejected AAMAS or IJCAI papers with *average* scores of at least 5.0 may be submitted to OptLearnMAS along with their previous reviews and scores and an optional letter indicating how the authors have addressed the reviewers' comments.
Please use the submission link above and indicate that the submission is a resubmission of an AAMAS/IJCAI rejected paper. The OptLearnMAS submission, previous reviews, and optional letter must be compiled into a single PDF file.
These submissions will not undergo the regular review process; instead, they will receive a light review by the chairs and will be accepted if the previous reviews are judged to meet the workshop's standards.
Per the AAMAS Workshop organizers:
There will be Springer volumes for best workshop papers and for visionary papers, so each workshop should nominate two papers, one for each volume. Authors should be aware that if a nominated workshop paper is also an AAMAS paper (or another conference paper), the version in the Springer volume must contain additional material (at least 30% more).
For questions about the submission process, contact the workshop chairs.
| Time (ET, UTC-4) | Time (NZ, UTC+12) | Talk / Presenter |
| --- | --- | --- |
| 12:10 | 4:10 | Invited Talk by Sven Koenig: "Multi-Agent Path Finding and Its Applications" |
| | | Session 1: Optimization and Learning -- Session chair: Ferdinando Fioretto |
| 13:00 | 5:00 | Contributed Talk: Distributed Observation Allocation for a Large-Scale Constellation |
| 13:15 | 5:15 | Contributed Talk: Multi-fidelity Optimization for Pedestrian Route Guidance |
| 13:30 | 5:30 | Contributed Talk: TOPS: transition-based volatility-reduced policy search |
| 13:45 | 5:45 | Contributed Talk: Learning (Local) Surrogate Loss Functions for Predict-Then-Optimize Problems |
| 14:00 | 6:00 | Contributed Talk: Learning General Inventory Management Policy for Large Supply Chain Network |
| 14:30 | 6:30 | Invited Talk by Nir Shlezinger: "Model-Based Deep Learning in Signal Processing and Communications" |
| | | Session 2: Optimization and Learning -- Session chair: Jiaoyang Li |
| 15:20 | 7:20 | Contributed Talk: Impact of Simple Algorithmic Filtering Strategies on Polarization in Social Networks due to Filter Bubbles: Preliminary Results |
| 15:35 | 7:35 | Contributed Talk: Towards Group Learning: Distributed Weighting of Experts |
| 15:50 | 7:50 | Contributed Talk: Learning to Play Adaptive Cyber Deception Game |
| 16:05 | 8:05 | Contributed Talk: Risk-Sensitive Bayesian Games for Multi-Agent Reinforcement Learning under Policy Uncertainty |
| 16:20 | 8:20 | Contributed Talk: Dynamic graph reduction optimization technique for interdiction games |
| 16:35 | 8:35 | Contributed Talk: Safe Delivery of Critical Services in Areas with Volatile Security Situation via a Stackelberg Game Approach |
| 16:50 | 8:50 | End of Workshop |
- Distributed Observation Allocation for a Large-Scale Constellation
Shreya Parjan, Steve Chien and Ryan Harrod
- Multi-fidelity Optimization for Pedestrian Route Guidance
Yusaku Kato, Shusuke Shigenaka and Masaki Onishi
- TOPS: transition-based volatility-reduced policy search
Liangliang Xu, Daoming Lyu, Yangchen Pan, Aiwen Jiang and Bo Liu
- Learning (Local) Surrogate Loss Functions for Predict-Then-Optimize Problems
Sanket Shah, Bryan Wilder, Andrew Perrault and Milind Tambe
- Learning General Inventory Management Policy for Large Supply Chain Network
Soh Kumabe, Shinya Shiroshita, Takanori Hayashi and Shirou Maruyama
- Impact of Simple Algorithmic Filtering Strategies on Polarization in Social Networks due to Filter Bubbles: Preliminary Results
Jean Springsteen, William Yeoh and Yevgeniy Vorobeychik
- Towards Group Learning: Distributed Weighting of Experts
Benjamin Abramowitz and Nicholas Mattei
- Learning to Play Adaptive Cyber Deception Game
Yinuo Du, Zimeng Song, Stephanie Milani, Cleotilde Gonzalez and Fei Fang
- Risk-Sensitive Bayesian Games for Multi-Agent Reinforcement Learning under Policy Uncertainty
Hannes Eriksson, Debabrota Basu, Mina Alibeigi and Christos Dimitrakakis
- Dynamic graph reduction optimization technique for interdiction games
Jim Blythe and Alexey Tregubov
- Safe Delivery of Critical Services in Areas with Volatile Security Situation via a Stackelberg Game Approach
Tien Mai and Arunesh Sinha
Best Paper Award
TOPS: transition-based volatility-reduced policy search
Liangliang Xu, Daoming Lyu, Yangchen Pan, Aiwen Jiang and Bo Liu
Multi-Agent Path Finding and Its Applications
by Sven Koenig (University of Southern California)
Abstract: The coordination of robots and other agents is becoming increasingly important in industry. For example, on the order of one thousand robots already navigate autonomously in Amazon fulfillment centers to move inventory pods all the way from their storage locations to the picking stations that need the products they store (and vice versa). Optimal and, in some cases, even approximately optimal path planning for these robots is NP-hard, yet one must find high-quality collision-free paths for them in real-time. Algorithms for such multi-agent path-finding problems have been studied in robotics and theoretical computer science for some time, but existing methods are either fast yet produce solutions of insufficient quality, or produce good solutions yet are too slow. In this talk, I will discuss different variants of multi-agent path-finding problems, cool ideas for both solving them and executing the resulting plans robustly, and several of their applications, including warehousing. Our research on this topic has been funded by both NSF and Amazon Robotics.
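To make the problem setting concrete, here is a minimal, self-contained sketch of multi-agent path finding via prioritized planning on a grid. This is a toy illustration only, not one of the algorithms from the talk: the grid, the space-time BFS, and the restriction to vertex conflicts (edge-swap conflicts are ignored) are simplifying assumptions made for brevity.

```python
from collections import deque

def plan_paths(grid, starts, goals):
    """Prioritized planning for multi-agent path finding on a 4-connected grid.

    Agents are planned one at a time with BFS in space-time; each later
    agent treats earlier agents' (time, cell) pairs as moving obstacles
    (vertex conflicts only, for brevity). grid[r][c] == 1 marks a wall.
    Returns a list of paths (lists of (row, col)) or None on failure.
    """
    rows, cols = len(grid), len(grid[0])
    reserved = set()            # (time, cell) pairs taken by earlier agents
    horizon = rows * cols * 4   # crude bound on the search depth
    paths = []
    for start, goal in zip(starts, goals):
        frontier = deque([(0, start)])
        parent = {(0, start): None}   # doubles as the visited set
        found = None
        while frontier:
            t, cell = frontier.popleft()
            if cell == goal:
                found = (t, cell)
                break
            if t >= horizon:
                continue
            r, c = cell
            # The (0, 0) move lets an agent wait in place for one step.
            for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):
                nr, nc = r + dr, c + dc
                nxt = (t + 1, (nr, nc))
                if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                        and nxt not in reserved and nxt not in parent):
                    parent[nxt] = (t, cell)
                    frontier.append(nxt)
        if found is None:
            return None
        # Reconstruct the path and reserve its cells for later agents.
        path = []
        node = found
        while node is not None:
            path.append(node[1])
            node = parent[node]
        path.reverse()
        for t, cell in enumerate(path):
            reserved.add((t, cell))
        # The agent then parks at its goal, blocking that cell afterwards.
        for t in range(len(path), horizon + 1):
            reserved.add((t, path[-1]))
        paths.append(path)
    return paths
```

Prioritized planning is fast but incomplete (a bad priority order can make a solvable instance fail), which illustrates the quality-versus-speed trade-off the abstract describes.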
Model-Based Deep Learning in Signal Processing and Communications
by Nir Shlezinger (Ben-Gurion University)
Abstract: Recent years have witnessed a dramatically growing interest in machine learning (ML) methods. These data-driven trainable structures have demonstrated unprecedented empirical success in various applications, including computer vision and speech processing. The benefits of ML-driven techniques over traditional model-based approaches are twofold: first, ML methods are independent of the underlying stochastic model, and thus can operate efficiently in scenarios where this model is unknown, or its parameters cannot be accurately estimated; second, when the underlying model is extremely complex, ML algorithms have demonstrated the ability to extract and disentangle the meaningful semantic information from the observed data. Nonetheless, not every problem can and should be solved using deep neural networks (DNNs). In fact, in scenarios for which model-based algorithms exist and are computationally feasible, these analytical methods are typically preferable over ML schemes due to their theoretical performance guarantees and possible proven optimality. Notable application areas where model-based schemes are typically preferable, and whose characteristics are fundamentally different from conventional deep learning applications, include signal processing and digital communications. In this talk, I will present methods for combining DNNs with traditional model-based algorithms. We will show how hybrid model-based/data-driven implementations arise from classical methods in communications and signal processing in general, and show how fundamental classic techniques can be implemented without knowledge of the underlying statistical model, while achieving improved robustness to uncertainty.
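One well-known instance of the hybrid model-based/data-driven idea is "deep unfolding": take a classical iterative algorithm, fix the number of iterations, and treat per-iteration parameters as trainable weights. The toy sketch below is our own illustration, not code from the talk; the least-squares setting, the finite-difference trainer, and all hyperparameters are invented for demonstration purposes.

```python
def matvec(A, x):
    # 2x2 matrix-vector product.
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def grad(A, x, b):
    # Gradient of 0.5 * ||A x - b||^2 is A^T (A x - b).
    r = [v - w for v, w in zip(matvec(A, x), b)]
    return [A[0][0] * r[0] + A[1][0] * r[1],
            A[0][1] * r[0] + A[1][1] * r[1]]

def unfolded_gd(A, b, steps):
    # "Unfolded" gradient descent: a fixed, small number of iterations,
    # each with its own trainable step size -- the layers of the network.
    x = [0.0, 0.0]
    for mu in steps:
        g = grad(A, x, b)
        x = [x[0] - mu * g[0], x[1] - mu * g[1]]
    return x

def loss(A, data, steps):
    # Mean squared error of the unfolded solver over (b, x_true) pairs.
    total = 0.0
    for b, x_true in data:
        x = unfolded_gd(A, b, steps)
        total += (x[0] - x_true[0]) ** 2 + (x[1] - x_true[1]) ** 2
    return total / len(data)

A = [[2.0, 0.5], [0.5, 1.0]]                      # known (model-based) forward operator
x_stars = [[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]]  # training solutions
data = [(matvec(A, x), x) for x in x_stars]

steps = [0.1, 0.1, 0.1]  # K = 3 unfolded layers
eps, lr = 1e-4, 0.01
for _ in range(300):     # tune step sizes with finite-difference gradients
    for k in range(len(steps)):
        steps[k] += eps
        up = loss(A, data, steps)
        steps[k] -= 2 * eps
        down = loss(A, data, steps)
        steps[k] += eps
        steps[k] -= lr * (up - down) / (2 * eps)
```

The model structure (the gradient of the known least-squares objective) is retained, while the data determine only the step-size schedule; this is the sense in which such hybrids keep the interpretability and guarantees of model-based methods while gaining data-driven adaptivity.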