Horizon Project SEAL 

Horizon Project SEAL (Strategic rEasoning for sociALly good mechanisms)

The project, funded by the Horizon Europe Marie Skłodowska-Curie Actions programme, aims to design a logical framework, based on Strategy Logic, for formally verifying and designing mechanisms for social choice.

Main achievements

Verification of Bayesian Mechanisms

We have proposed an approach for formally verifying Bayesian mechanisms with Probabilistic Strategy Logic (PSL). We have shown how to use it to encode classic notions from Mechanism Design, including Bayesian-Nash equilibrium and incentive compatibility. This approach allows for the automatic verification of a wide class of Bayesian mechanisms and motivates further research on applications of logic-based approaches to Automated Mechanism Design (AMD). Unlike previous proposals, it can take into account a wide range of settings (e.g. randomized, indirect, and Bayesian mechanisms). Furthermore, thanks to the great expressiveness of the specification language PSL, the ex-ante, interim, and ex-post verification of complex solution concepts and properties is fully automated through model checking of logical formulas.
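As a reminder of one of the encoded notions, (Bayesian) incentive compatibility of a direct mechanism f requires that truthful reporting maximize each agent's expected utility, where the expectation is taken over the other agents' types:

```latex
\[
\forall i \;\; \forall \theta_i, \theta_i' :\qquad
\mathbb{E}_{\theta_{-i}}\!\left[ u_i\big(f(\theta_i, \theta_{-i}),\, \theta_i\big) \right]
\;\geq\;
\mathbb{E}_{\theta_{-i}}\!\left[ u_i\big(f(\theta_i', \theta_{-i}),\, \theta_i\big) \right]
\]
```

Roughly, expectations of this kind are what the probabilistic operators of PSL capture, at the ex-ante, interim, or ex-post stage.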

Results were published at SPIRIT@AIxIA 2023 and presented at the AASG@AAMAS 2024 workshop.

Probabilistic Verification under Imperfect Information

We have investigated the strategy logics PATL and PATL* (fragments of PSL) under imperfect information. Since the model-checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. We present novel decidability and complexity results for the case in which the model transitions are stochastic and agents play uniform strategies. As the main result, we show that, in stochastic multi-agent systems (MAS) under imperfect information, model checking PATL is in EXPTIME when agents play probabilistic strategies. We also show that model checking PATL* is PSPACE-complete when the proponent coalition is restricted to deterministic strategies. Under the same restriction, model checking PATL is on the second level of the polynomial hierarchy. We illustrate the usefulness of this setting with meaningful examples based on online approval-based elections, probabilistic social choice, and security games.
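At toy scale, the restriction to memoryless deterministic strategies makes model checking a finite search. The sketch below (a hypothetical single-agent stochastic model, not taken from the papers) checks a PATL-style property of the form <<agent>>^{>= 0.9} F goal by enumerating the agent's memoryless deterministic strategies and evaluating reachability probabilities in the induced Markov chains:

```python
import itertools

# Hypothetical toy model: states 0..3, state 3 is the (absorbing) goal.
# In each state the agent picks action 'a' or 'b'; each choice yields a
# probability distribution over successor states.
TRANS = {
    (0, 'a'): {1: 0.5, 2: 0.5},
    (0, 'b'): {2: 1.0},
    (1, 'a'): {3: 0.9, 0: 0.1},
    (1, 'b'): {0: 1.0},
    (2, 'a'): {3: 0.2, 2: 0.8},
    (2, 'b'): {0: 1.0},
}
STATES, GOAL = [0, 1, 2], 3

def reach_prob(strategy, start=0, iters=2000):
    """Probability of eventually reaching GOAL under a memoryless
    deterministic strategy, by value iteration on the induced Markov chain."""
    v = {s: 0.0 for s in STATES}
    v[GOAL] = 1.0
    for _ in range(iters):
        v = {**{GOAL: 1.0},
             **{s: sum(p * v[t] for t, p in TRANS[(s, strategy[s])].items())
                for s in STATES}}
    return v[start]

# Memoryless deterministic strategies are just one action choice per state,
# so there are finitely many of them and we can search exhaustively.
best = max(
    (dict(zip(STATES, choice))
     for choice in itertools.product('ab', repeat=len(STATES))),
    key=reach_prob,
)
print(reach_prob(best) >= 0.9)  # does <<agent>>^{>= 0.9} F goal hold here?
```

This brute-force search is of course exponential in the number of states; the complexity results above concern how much better one can do for PATL and PATL* in the general multi-agent, imperfect-information setting.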

Results were published at KR 2023 and AAMAS 2024.

Probabilistic Natural Strategies

We have proposed Behavioral Natural Strategies, a probabilistic extension of Natural Strategies, together with variants of the probabilistic temporal logics PATL and PATL* that handle such strategies (the resulting logics are called NatPATL and NatPATL*, respectively). As the main result, we show that, in stochastic MAS, NatPATL model checking is NP-complete when the active coalition is restricted to deterministic strategies. We also give a 2NEXPTIME complexity result for NatPATL* under the same restriction. In the unrestricted case, we give an EXPSPACE complexity result for NatPATL and a 3EXPSPACE complexity result for NatPATL*. We provide examples based on secure voting.
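To give a feel for the rule-based shape of (behavioral) natural strategies, here is a minimal illustrative sketch, assuming a simple propositional state representation; the rules, atoms, and complexity measure below are hypothetical examples, not the formal definitions from the paper:

```python
import random

# A behavioral natural strategy as an ordered list of guarded rules
# (condition, distribution over actions).  The first rule whose guard
# holds in the current state fires; with deterministic strategies each
# distribution would put probability 1 on a single action.
strategy = [
    (frozenset({'ballot_open', 'not_voted'}), {'vote': 0.8, 'wait': 0.2}),
    (frozenset({'ballot_open'}),              {'wait': 1.0}),
    (frozenset(),                             {'idle': 1.0}),  # default rule
]

def complexity(strategy):
    """A simple size measure: number of atoms appearing in the guards."""
    return sum(len(cond) for cond, _ in strategy)

def act(strategy, state, rng=random):
    """Fire the first rule whose guard is satisfied and sample an action."""
    for cond, dist in strategy:
        if cond <= state:  # all atoms of the guard hold in the state
            actions, probs = zip(*dist.items())
            return rng.choices(actions, weights=probs)[0]
    raise ValueError("no applicable rule")

print(complexity(strategy))            # 3
print(act(strategy, {'ballot_open'}))  # 'wait'
```

Bounding the size of such rule lists is what makes natural strategies "natural": the agent's behavior stays human-readable, and the complexity of model checking can be studied as a function of that bound.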

Results were published at AAAI 2024.

Quantitative Module Checking

We have investigated the problem of quantitative module checking in Multi-Agent Systems, which formalizes the verification of (possibly multi-agent) systems that must adapt their behavior to the input they receive from the environment, which can also be viewed as an authority. The quantitative setting enables the verification of different levels of satisfaction. We consider specifications given in Quantitative Alternating-time Temporal Logics and investigate complexity and expressivity results. We illustrate the approach with an example based on a weighted voting game.
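For concreteness, a weighted voting game of the kind used in the example can be sketched as follows; the weights and quota are hypothetical, and the "margin" is just one simple way a quantitative setting can grade outcomes beyond a yes/no winning condition:

```python
from itertools import combinations

# Hypothetical weighted voting game: each agent has a weight, and a
# coalition wins iff its total weight meets the quota.
weights = {'a': 4, 'b': 3, 'c': 2, 'd': 1}
quota = 6

def wins(coalition):
    return sum(weights[i] for i in coalition) >= quota

def winning_coalitions():
    agents = sorted(weights)
    return [set(c) for r in range(len(agents) + 1)
            for c in combinations(agents, r) if wins(c)]

def margin(coalition):
    """A quantitative satisfaction level: weight in excess of the quota."""
    return sum(weights[i] for i in coalition) - quota

print(wins({'a', 'b'}))       # True  (4 + 3 >= 6)
print(len(winning_coalitions()))
```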

Results were published at AAMAS 2024.


Other activities