Overview

Multi-agent systems (MAS) and self-adaptive systems (SAS) are used across robotics, resource management, and autonomic computing, yet understanding why agents make particular decisions remains an open challenge. When multiple agents interact through a shared environment, the resulting behaviour is difficult to trace, attribute, and explain. DOAgent is a Python library that addresses this gap by treating shared data as the primary interface between agents, automatically recording decisions, state transitions, and contributions so that agent behaviour can be analysed after execution. This project proposes to reproduce an existing multi-agent or self-adaptive system from the literature using DOAgent, and then to explore interpretability approaches on the recorded agent interactions.
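To make the data-oriented idea concrete, the sketch below is a minimal, self-contained illustration (it does not use DOAgent's actual API; `SharedStore`, `Record`, and the two toy agents are invented for this example). Agents interact only through a shared store, and every write is logged together with the agent's identity and the data it read, so each decision can be attributed after execution.

```python
# Illustrative sketch of a data-oriented agent loop (NOT DOAgent's API):
# agents communicate only via a shared store that keeps an append-only
# provenance log of every write.
from dataclasses import dataclass


@dataclass
class Record:
    step: int      # simulation step at which the write happened
    agent: str     # which agent made the decision
    key: str       # which datum it wrote
    value: object  # the value written
    inputs: dict   # snapshot of the data the agent read before deciding


class SharedStore:
    def __init__(self):
        self.data = {}
        self.log = []   # append-only provenance log
        self.step = 0

    def read(self, keys):
        return {k: self.data.get(k) for k in keys}

    def write(self, agent, key, value, inputs):
        self.log.append(Record(self.step, agent, key, value, inputs))
        self.data[key] = value


def run(store, agents, steps):
    for _ in range(steps):
        for name, policy in agents.items():
            observed = store.read(["temperature"])
            key, value = policy(observed)
            store.write(name, key, value, observed)
        store.step += 1


# Two toy agents: a sensor that raises the temperature each step, and a
# controller that switches cooling on when the sensed value exceeds 22.
agents = {
    "sensor": lambda obs: ("temperature", (obs["temperature"] or 20) + 1),
    "controller": lambda obs: ("cooling_on", obs["temperature"] > 22),
}
store = SharedStore()
run(store, agents, steps=3)

# Post-hoc analysis: who wrote what, when, and from which inputs.
trace = [(r.step, r.agent, r.key, r.value) for r in store.log]
```

Because the log captures each decision's inputs as well as its output, questions such as "why did the controller switch cooling on at step 2?" can be answered from the trace alone, which is the kind of after-the-fact analysis the project targets.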

FAQs

  • What will I learn in this project?

    You will learn about multi-agent systems, data-oriented architectures, and interpretability techniques for understanding agent behaviour. You will gain practical experience building and analysing multi-agent systems using the DOAgent library.

  • What is the objective of the project?

    You will select and reproduce a multi-agent or self-adaptive system from the literature (e.g., a cooperative exploration task, a MAPE-K feedback loop, or a competitive resource-allocation scenario) using the DOAgent library. The first step is implementing the chosen system within DOAgent’s session and environment abstractions, ensuring that agent interactions are fully recorded. You will then apply and compare DOAgent’s built-in analysis tools (provenance, traceability, accountability, and interpretability) to understand the emergent behaviour. Finally, you will explore extensions such as post-hoc explanation methods, visualisation of decision chains, or scaling analysis to larger agent populations.
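    As one candidate reproduction target, a MAPE-K feedback loop can be prototyped in a few lines. The sketch below is a generic illustration under invented names (`monitor`, `analyse`, `plan`, `execute`, and a `knowledge` dictionary); it is independent of DOAgent's real session and environment abstractions, which the project would use instead.

```python
# Minimal MAPE-K loop sketch (names are illustrative, not DOAgent's API):
# Monitor -> Analyse -> Plan -> Execute phases share a Knowledge dict.
knowledge = {"target": 20.0, "readings": [], "plan": None}


def monitor(env, k):
    # Monitor: sample the managed environment into shared knowledge.
    k["readings"].append(env["temperature"])


def analyse(k):
    # Analyse: adaptation is needed if the latest reading deviates
    # from the target by more than 1 degree.
    return abs(k["readings"][-1] - k["target"]) > 1.0


def plan(k):
    # Plan: choose a corrective action and record it in knowledge.
    k["plan"] = "cool" if k["readings"][-1] > k["target"] else "heat"


def execute(env, k):
    # Execute: apply the planned action to the environment.
    env["temperature"] += -2.0 if k["plan"] == "cool" else 2.0


def mape_k(env, k, steps):
    for _ in range(steps):
        monitor(env, k)
        if analyse(k):
            plan(k)
            execute(env, k)


env = {"temperature": 25.0}
mape_k(env, knowledge, steps=4)
```

    In a DOAgent-based reproduction, each phase would read from and write to the shared data layer so that the loop's decisions (which readings triggered which plans) are recorded automatically rather than hidden inside the functions.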

  • How does this fit into the bigger picture?

    This project is part of the Self-Sustaining Software Systems (S4) research agenda and contributes directly to the development of the DOAgent library. The broader goal is to enable accountable and interpretable multi-agent systems where every decision is traceable to its inputs, tools, and agents.

Related Group Projects