Horizon Project SEAL
Horizon Project SEAL (Strategic rEasoning for sociALly good mechanisms)
The project, funded by the Horizon Europe Marie Skłodowska-Curie Actions, aims to design a logical framework based on Strategy Logic for the formal verification and design of mechanisms for social choice.
Lead researcher: Munyque Mittelmann
Project coordinator: Aniello Murano
Starting date: August 2023
Duration: 24 months
Project webpage at CORDIS
Main achievements
Verification of Bayesian Mechanisms
We have proposed an approach for formally verifying Bayesian mechanisms with Probabilistic Strategy Logic (PSL) and shown how to use it to encode classic notions from Mechanism Design, including Bayesian-Nash equilibrium and incentive compatibility. The approach allows for the automatic verification of a wide class of Bayesian mechanisms and motivates further research on logic-based approaches to Automated Mechanism Design (AMD). Unlike previous proposals, it accounts for a wide range of settings (e.g., randomized, indirect, and Bayesian mechanisms). Furthermore, thanks to the expressiveness of the specification language, the ex-ante, interim, and ex-post verification of complex solution concepts and properties is fully automated through model checking of logical formulas.
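As a schematic illustration (the notation is simplified and the concrete names are not taken from the paper), ex-interim incentive compatibility for an agent i with true type t_i can be written as a PSL-style formula stating that no unilateral deviation y improves the agent's expected utility over truthful reporting:

\[
\forall y .\; \mathbb{E}_{(i,\,\sigma_{t_i})}\big[\mathit{util}_i\big] \;\geq\; \mathbb{E}_{(i,\,y)}\big[\mathit{util}_i\big]
\]

Here \((i, x)\) binds a strategy variable \(x\) to agent \(i\), \(\sigma_{t_i}\) is the truthful reporting strategy for type \(t_i\), and the expected-utility shorthand stands for a comparison of PSL probability terms over outcome states, with the expectation taken over the type distribution and the mechanism's randomization. Model checking such a formula against the game model of the mechanism then decides the property automatically.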
Results were published at SPIRIT@AIxIA 2023 and presented at the workshop AASG@AAMAS 2024.
Probabilistic Verification under Imperfect Information
We have investigated the strategy logics PATL and PATL* (fragments of PSL) under imperfect information. Since the model-checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. We present novel decidability and complexity results for the case in which the model transitions are stochastic and agents play uniform strategies. As the main result, we show that, in stochastic multi-agent systems (MAS) under imperfect information, model checking PATL is in EXPTIME when agents play probabilistic strategies. We also show that model checking PATL* is PSPACE-complete when the proponent coalition is restricted to deterministic strategies; under the same restriction, model checking PATL lies on the second level of the polynomial hierarchy. We illustrate the usefulness of this setting with meaningful examples based on online approval-based elections, probabilistic social choice, and security games.
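For intuition, a typical PATL specification in this setting is a probabilistic coalition modality. Schematically (with illustrative atomic propositions not taken from the paper), in an online approval-based election one may ask whether a coalition A of voters has uniform memoryless strategies that ensure candidate c is eventually elected with probability at least 1/2:

\[
\langle\!\langle A \rangle\!\rangle^{\geq 1/2}\, \mathbf{F}\, \mathit{elected}_c
\]

The complexity results above bound the cost of deciding such formulas over stochastic MAS with imperfect information, depending on whether the agents' strategies are probabilistic or deterministic.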
Probabilistic Natural Strategies
We have proposed Behavioral Natural Strategies, a probabilistic extension of Natural Strategies, together with variants of the probabilistic temporal logics PATL and PATL* that handle such strategies (the resulting logics are called NatPATL and NatPATL*, respectively). As the main result, we show that, in stochastic MAS, NatPATL model checking is NP-complete when the active coalition is restricted to deterministic strategies. We also give a 2NEXPTIME complexity result for NatPATL* under the same restriction. In the unrestricted case, we give an EXPSPACE complexity result for NatPATL and a 3EXPSPACE complexity result for NatPATL*. We provide examples based on secure voting.
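Schematically (the notation is illustrative and may differ from the one used in the paper), NatPATL additionally bounds the complexity of the coalition's condition-action rules: a formula of the form below asks whether coalition A has behavioral natural strategies of complexity at most k that guarantee, with probability at least d, that the tally of a secure vote is eventually published untampered (the atoms are hypothetical):

\[
\langle\!\langle A \rangle\!\rangle^{\leq k,\;\geq d}\; \mathbf{F}\, \big(\mathit{published} \wedge \neg\,\mathit{tampered}\big)
\]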
Results were published at AAAI 2024.
Quantitative Module Checking
We have investigated the problem of quantitative module checking in multi-agent systems, which formalizes the verification of (possibly multi-agent) systems that must adapt their behavior to the input they receive from the environment, viewed as an authority. The quantitative setting enables the verification of different degrees of satisfaction. We consider specifications given in quantitative Alternating-time Temporal Logics, establish complexity and expressivity results, and illustrate the approach with an example based on a weighted voting game.
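Informally, module checking asks whether a specification holds no matter how the environment, playing the role of the authority, restricts the system's behavior at each step. In the quantitative setting this becomes a threshold question. Schematically (illustrative notation only), for a weighted voting game one may require that, under every environment behavior e, the satisfaction value of coalition A's ability to eventually meet the quota stays above a threshold ϑ:

\[
\min_{e \,\in\, \mathit{Env}} \; \mathrm{val}_{M_e}\!\big( \langle\!\langle A \rangle\!\rangle\, \mathbf{F}\, \mathit{quota\_met} \big) \;\geq\; \vartheta
\]

Here \(M_e\) denotes the module pruned according to environment behavior \(e\), and \(\mathrm{val}\) is the quantitative satisfaction value of the formula in that model.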
Results were published at AAMAS 2024.
Publications
★ D. Hyland, M. Mittelmann, A. Murano, G. Perelli, M. Wooldridge. "Incentive Design for Rational Agents". To appear at KR 2024.
★ Y. He, M. Mittelmann, A. Murano, A. Saffidine, M. Thielscher. "Verification of General Games with Imperfect Information using Strategy Logic". To appear at KR 2024.
★ W. Jamroga, M. Mittelmann, A. Murano, G. Perelli. "Playing Quantitative Games Against an Authority: On the Module Checking Problem". AAMAS 2024.
★ F. Belardinelli, W. Jamroga, M. Mittelmann, A. Murano. "Verification of Stochastic Multi-Agent Systems with Forgetful Strategies". AAMAS 2024.
★ R. Berthon, J.-P. Katoen, M. Mittelmann, A. Murano. "Natural Strategic Ability in Stochastic Multi-Agent Systems". AAAI 2024.
F. Belardinelli, W. Jamroga, M. Mittelmann, A. Murano. "Strategic Abilities of Forgetful Agents in Stochastic Environments" (short paper). KR 2023. arXiv.
M. Mittelmann, B. Maubert, A. Murano, L. Perrussel. "Verification of Bayesian Mechanisms with Strategy Logic" (extended abstract). SPIRIT@AIxIA 2023.
Other activities
Organization of the workshop SPIRIT@AIxIA 2024.
Organization of the workshop LAMAS&SR 2024.
Course "Formal Aspects of Strategic Reasoning and Game Playing" accepted at ESSAI & ACAI 2024.
Tutorial "Formal Aspects of Strategic Reasoning in Multi-Agent Systems" at KR 2024.
Tutorial "Strategic Reasoning in Automated Mechanism Design" at KR 2023.
Presentation "Formal Verification for Fostering the Design of Mechanisms for Social Good" at the workshop AASG@AAMAS 2024.
Presentation "Playing Quantitative Games Against an Authority: On the Module Checking Problem" at the workshop AAPEI@ECAI 2024.
Presentation "Natural Strategic Ability in Stochastic Multi-Agent Systems" at the workshop ActSynt@ECAI 2024.