Overview
Intensive care units (ICUs) generate vast volumes of patient data, yet clinicians often lack tools to translate this data into timely, trustworthy decisions. This project sits within the aICU project, which aims to support safe, interpretable, and clinically meaningful decision-making by establishing a standardised pipeline for developing, evaluating, and deploying AI in critical care. The project proposes to reproduce existing machine learning models that address specific ICU problems (e.g., mortality prediction, sepsis detection, ventilator weaning, or length-of-stay estimation) and then to investigate interpretability methods that make model predictions understandable to clinicians.
FAQs
- What will I learn in this Project?
You will learn about machine learning for healthcare, clinical decision support, and interpretability techniques for high-stakes prediction models. You will gain practical experience working with real-world intensive care data and evaluating model explanations in a clinical context.
- What is the objective of the project?
You will select and reproduce one or more published machine learning models for an ICU prediction task using standardised critical care datasets (e.g., MIMIC). The first step is to replicate the reported results and validate them against the original baselines. You could then apply and compare interpretability techniques (e.g., SHAP, attention-based explanations, or concept-based methods) to understand which features drive predictions and whether these explanations align with clinical knowledge; a minimal sketch of this reproduce-then-explain workflow follows below.
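To make the two steps concrete, here is a hedged sketch of the workflow: fit a baseline classifier, check its discrimination, then compute SHAP values to rank feature contributions. The feature names and synthetic data are hypothetical stand-ins (real MIMIC access requires credentialing), and the gradient-boosted model is a placeholder for whichever published model you end up reproducing.

```python
# Sketch of a reproduce-then-explain workflow for an ICU mortality task.
# NOTE: data, feature names, and the label model are synthetic illustrations,
# not a MIMIC extract or any published model's actual pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "age": rng.normal(65, 15, n),
    "heart_rate": rng.normal(90, 20, n),
    "lactate": rng.gamma(2.0, 1.0, n),
    "urine_output": rng.gamma(3.0, 0.5, n),
})
# Synthetic outcome loosely tied to age and lactate, purely for illustration.
logit = 0.03 * (X["age"] - 65) + 0.8 * (X["lactate"] - 2.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Step 1: reproduce the baseline and validate discrimination (AUROC)
# against the figures reported in the original paper.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Step 2: explain the model with SHAP and rank global feature importance,
# then check whether the ranking matches clinical expectations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:>14}: {score:.3f}")
```

In practice the same pattern carries over to the real task: swap in the credentialed dataset, the published model, and the paper's reported metrics, and compare the SHAP rankings against clinical knowledge rather than against a synthetic label.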
- How does this fit into the bigger picture?
This project is part of the aICU project, initiated at Karolinska Institutet in partnership with the University of Cambridge, which aims to support safe, interpretable, and clinically meaningful decision-making by establishing a standardised pipeline for developing, evaluating, and deploying AI in critical care. It also supports the Self-Sustaining Software Systems (S4) research agenda and contributes to ongoing work on interpretable AI for healthcare. The broader goal is to establish trustworthy AI pipelines in which model predictions can be explained, audited, and safely integrated into clinical workflows.