
Muhammad Mursil defended his PhD thesis:

Interpretable Predictive Modelling for Multi-domain Healthcare Outcomes and Insights

Abstract: Modern healthcare faces a critical need for predictive models that can reliably guide clinical decisions, yet many state-of-the-art artificial intelligence (AI) approaches remain “black boxes”. Current machine learning (ML) and deep learning (DL) models often achieve high accuracy but provide limited transparency. They typically predict outcomes without explaining why they occur or how those outcomes would change under different interventions (the “what-if” scenarios). Furthermore, models trained on narrow datasets often fail to generalize across different hospitals or patient populations, limiting their real-world reliability. These challenges call for a shift from focusing solely on accuracy to developing decision-oriented AI that is transparent and interpretable.

In response, this thesis develops interpretable ML and DL approaches that not only enable early prediction of health outcomes, but also uncover key contributing factors and estimate potential intervention effects. It introduces methods to deliver timely, explainable, and actionable decision support across diverse healthcare domains. Rather than treating AI models as opaque black boxes, the proposed methods provide human-comprehensible rationales for predictions, bridging the gap between predictive analytics and clinical decision-making. To ensure broad applicability, we validate these methods in two distinct settings, maternal–foetal health and dementia, spanning diverse patient populations, data modalities, and timescales. This evaluation demonstrates robustness to distribution shifts and practical utility in real-world care.

In maternal–foetal medicine, we present ML and DL models for early antenatal risk assessment, predicting neonatal birth weight (BW) months before delivery. The models integrate routine maternal health indicators with underexplored nutritional and genetic biomarkers to improve low BW prediction. We first benchmark traditional ML approaches for BW prediction, then advance to DL with single- and multimodal inputs to enhance performance and analyse each input’s impact on BW. Motivated by underperformance and challenges in multimodal fusion, we design a transformer-based multi-encoder that fuses disparate data while preserving each modality’s contribution, enabling accurate forecasts even where ultrasound is unavailable. Crucially, the model remains interpretable, revealing how maternal factors shape foetal growth and supporting personalised prenatal care.

In addition, we address dementia prediction through multi-omics biomarker analysis. We introduce interpretable ML and DL models that combine blood-based multi-omics data (profiles of plasma proteins and metabolites) with conventional clinical risk factors to predict an individual’s likelihood of developing dementia years before onset. We also identify novel, compact panels of protein and metabolite biomarkers that could enable early screening and subtype-specific risk stratification. The models achieve robust predictive performance while highlighting biologically meaningful features in the data. Their interpretable outputs suggest avenues for targeted preventive interventions and clinical trial enrichment.

These contributions illustrate how incorporating interpretability and transparency into AI models can enhance clinical decision support across diverse healthcare scenarios. Beyond improving predictive accuracy, the thesis’s innovations, from integrating underexplored maternal factors (e.g., nutritional and genetic) through transformer-based multimodal data fusion to identifying novel multi-omic biomarkers for dementia risk stratification, showcase a path toward AI systems that not only anticipate risk but also explain their reasoning and explore “what-if” scenarios. This thesis highlights the broader significance of interpretable AI in fostering more proactive, personalized, and evidence-driven healthcare.
