Available Masters/Part III Projects

Improving Probabilistic Models for Machine Learning in Science

Each of the following six projects involves understanding and extending an existing probabilistic model commonly used in a scientific context, with the aim of improving usability and model understanding. Please email me (ar847@cam.ac.uk) if interested.

Interpretable Machine Learning for Intensive Care Decision Support

Intensive care units (ICUs) generate vast volumes of patient data, yet clinicians often lack tools to translate this data into timely, trustworthy decisions. The aICU project aims to support safe, interpretable, and clinically meaningful decision-making by establishing a standardised pipeline for developing, evaluating, and deploying AI in critical care. This project proposes to reproduce existing machine learning models that address specific ICU problems (e.g., mortality prediction, sepsis detection, ventilator weaning, or length-of-stay estimation) and then investigate interpretability methods to make the model predictions understandable to clinicians.
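As a sketch of the kind of model-agnostic interpretability method such a project might investigate, the snippet below fits a toy "mortality prediction" classifier on synthetic data and ranks features by permutation importance. The feature names and data are invented purely for illustration and have no connection to the aICU pipeline or any real ICU dataset.

```python
# Toy illustration: rank features of a "mortality prediction" model by
# permutation importance (a model-agnostic interpretability method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical features (names for illustration only): heart rate, lactate, age
X = rng.normal(size=(n, 3))
# Synthetic outcome driven mostly by the second feature ("lactate")
y = (X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["heart_rate", "lactate", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A real project would swap the synthetic data for an ICU dataset and compare several interpretability methods, since different methods can attribute importance quite differently on correlated clinical variables.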

Interpretable Multi-Agent Systems with DOAgent

Multi-agent systems (MAS) and self-adaptive systems (SAS) are used across robotics, resource management, and autonomous computing, yet understanding why agents make particular decisions remains an open challenge. When multiple agents interact through shared environments, the resulting behaviour is difficult to trace, attribute, and explain. DOAgent is a Python library that addresses this gap by treating shared data as the primary interface between agents, automatically recording decisions, state transitions, and contributions so that agent behaviour can be analysed after execution. This project proposes to reproduce an existing multi-agent or self-adaptive system from the literature using DOAgent, and then explore interpretability approaches on the recorded agent interactions.
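The recording idea described above can be illustrated with a minimal sketch. Note that this is not DOAgent's actual API; the store, agent, and field names here are all hypothetical, and the point is only the pattern: agents interact solely through shared data, and every write is logged for post-hoc analysis.

```python
# Minimal sketch (NOT DOAgent's API): agents communicate only through a
# shared store, and every write is recorded so behaviour can be traced
# and attributed after execution.
from dataclasses import dataclass, field

@dataclass
class SharedStore:
    state: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # (agent, key, value) records

    def write(self, agent: str, key: str, value):
        self.state[key] = value
        self.log.append((agent, key, value))

def thermostat(store: SharedStore):
    # Hypothetical agent: reacts to a shared temperature reading
    if store.state.get("temperature", 20) > 25:
        store.write("thermostat", "cooling", True)

store = SharedStore()
store.write("sensor", "temperature", 30)
thermostat(store)

# Post-hoc analysis: which agent contributed which decision, in what order?
for agent, key, value in store.log:
    print(f"{agent} set {key} = {value}")
```

The recorded log is exactly the kind of artefact on which interpretability approaches could be explored, e.g. attributing an emergent outcome back to the sequence of agent writes that produced it.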

Machine Learning (Bayesian Methodology, Inference and Applications)

Students interested in working with me should come up with the grain of an idea before reaching out. If there is a match, I would be happy to discuss it, flesh out the details, and create a project out of it. I have always believed that part of doing a project is coming up with ideas and angles ripe for exploration.

I am broadly interested in probabilistic machine learning and applications in climate science.

Unconventional AI and explainable AI

These twelve projects in unconventional and explainable AI would be supervised by Soumya Banerjee.

Available Undergrad Projects

5asideCHESS Engine and Tablebase

The idea of this project is to build an engine and tablebase for a Cambridge-based smaller variant of the classic game. The project would be carried out in contact with Ross Smith from 5asideCHESS, an organisation focused on improving social connections. It is offered to motivated students passionate about machine learning and chess.
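A tablebase is a table mapping every reachable position to its game-theoretic value, built by working backwards from terminal positions. Since the 5aside variant's rules are not specified here, the idea can be sketched on a deliberately trivial game (the 21-counter subtraction game: take 1 to 3 counters per turn, taking the last counter wins); the same exhaustive-labelling principle underlies chess endgame tablebases.

```python
# Illustrative only: build a complete "tablebase" for the subtraction game.
# table[n] is True if the player to move wins with n counters remaining.
def build_tablebase(max_counters=21, moves=(1, 2, 3)):
    table = {0: False}  # no counters left: the player to move has already lost
    for n in range(1, max_counters + 1):
        # A position is winning iff some move reaches a losing position.
        table[n] = any(not table[n - m] for m in moves if m <= n)
    return table

tb = build_tablebase()
print(tb[21])  # positions that are multiples of 4 are losses for the mover
```

For a chess-like variant the position space is vastly larger, so a real project would need retrograde analysis over a compact position encoding rather than this forward loop, but the table-of-values goal is the same.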
