icu_ventilation_multimodal

Ongoing project predicting ventilation risk in patients admitted to the ICU using multimodal learning. We use a deep and cross network with joint fusion learning.

Overall, we show that multimodal learning with joint fusion and a deep and cross network outperforms baseline unimodal neural network models as well as existing published models.

Question

Mechanical ventilators are a crucial part of care for critically ill patients. However, their use is often resource-constrained because of limited supply and a shortage of trained staff to operate them. The aim of this study is to use EHR and chest X-ray data to predict which ICU patients will need mechanical ventilation.

Dataset

We used the MIMIC-IV dataset.

Cohort

We defined our cohort as patients diagnosed with pneumonia, respiratory failure, or acute respiratory distress syndrome (ARDS) who had a chest X-ray taken within 5 days of ICU admission, followed by an outcome of either invasive ventilation or death within 3 to 7 days after the observation window.
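The cohort criteria above can be sketched as a simple filter. Note that the column names (`diagnosis`, `cxr_offset_days`, `outcome_offset_days`, `outcome`) are hypothetical placeholders, not the actual MIMIC-IV schema:

```python
import pandas as pd

# Illustrative cohort filter; column names are assumptions, not MIMIC-IV's schema.
TARGET_DX = {"pneumonia", "respiratory failure", "ARDS"}

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Keep stays matching the diagnosis, CXR-timing, and outcome criteria."""
    has_dx = df["diagnosis"].isin(TARGET_DX)
    cxr_in_window = df["cxr_offset_days"].between(0, 5)          # CXR within 5 days of ICU admission
    outcome_in_window = df["outcome_offset_days"].between(3, 7)  # outcome 3-7 days after the window
    has_outcome = df["outcome"].isin({"invasive_ventilation", "death"})
    return df[has_dx & cxr_in_window & has_outcome & outcome_in_window]
```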

Features

Features from the EHR were selected based on medical importance and include heart rate, blood pressure, CO2 pressure and saturation level, platelet count, temperature, and Glasgow Coma Scale (GCS) score. Features with more than 30 percent missing values were dropped, and the remaining features were imputed using random forest imputation. Continuous variables were scaled with a standard scaler, and the minority positive class was augmented using the Synthetic Minority Over-sampling Technique (SMOTE).
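The preprocessing steps above can be sketched with scikit-learn; the thresholds and estimator settings below are illustrative assumptions, not the repository's exact configuration (SMOTE, from `imblearn`, would then be applied to the training split only):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

def preprocess(X: np.ndarray) -> np.ndarray:
    """Drop features >30% missing, random-forest impute, then standard-scale."""
    # Drop columns with more than 30 percent missing values.
    keep = np.isnan(X).mean(axis=0) <= 0.30
    X = X[:, keep]
    # Impute remaining gaps with a random-forest-based iterative imputer.
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=10, random_state=0),
        max_iter=5, random_state=0,
    )
    X = imputer.fit_transform(X)
    # Scale continuous features to zero mean, unit variance.
    return StandardScaler().fit_transform(X)
```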

Modelling

We built a multimodal model that uses CXR and EHR data. CXR images were encoded with a pretrained ResNet that was further fine-tuned on CheXpert images.

Joint learning was enforced by propagating the loss back into the ResNet model, so the image encoder continues to update while the fusion model trains:

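This joint fusion idea can be sketched as follows. The tiny CNN here is an illustrative stand-in for the pretrained ResNet, and all layer sizes are assumptions, not the repository's actual architecture; the key point is that one loss backpropagates through both the image encoder and the EHR branch:

```python
import torch
import torch.nn as nn

class JointFusionModel(nn.Module):
    """Illustrative joint fusion: one loss updates both modality branches."""
    def __init__(self, n_ehr: int, img_dim: int = 8):
        super().__init__()
        self.img_encoder = nn.Sequential(   # stand-in for the ResNet backbone
            nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(4, img_dim),
        )
        self.head = nn.Sequential(
            nn.Linear(img_dim + n_ehr, 16), nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, img: torch.Tensor, ehr: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.img_encoder(img), ehr], dim=1)  # joint representation
        return self.head(z).squeeze(1)                      # ventilation-risk logit

torch.manual_seed(0)
model = JointFusionModel(n_ehr=7)
img, ehr = torch.randn(4, 1, 32, 32), torch.randn(4, 7)
y = torch.tensor([0.0, 1.0, 1.0, 0.0])
loss = nn.functional.binary_cross_entropy_with_logits(model(img, ehr), y)
loss.backward()  # gradients flow into the image encoder as well
```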

We also employed a deep and cross network to enforce collaborative learning between the image and structured-data representations.
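A single cross layer of a deep and cross network, following the original DCN formulation x_{l+1} = x0 (x_l · w_l) + b_l + x_l, can be sketched in NumPy; this is an illustrative implementation, not the repository's code:

```python
import numpy as np

def cross_layer(x0: np.ndarray, xl: np.ndarray,
                w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One DCN cross layer over (batch, d) inputs; w, b are (d,) parameters."""
    interaction = (xl @ w)[:, None] * x0  # rank-1 explicit feature crossing with the input x0
    return interaction + b + xl           # residual connection preserves x_l
```

Stacking a few such layers (with a parallel deep MLP branch whose output is concatenated before the final classifier) gives the network explicit bounded-degree feature interactions between the image embedding and the EHR features.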

Results

A baseline EHR-only model scored an AUC of 0.80.

A baseline CXR-only model scored an AUC of 0.76.

The multimodal model scored an AUC of 0.85, an improvement over both unimodal baselines.

A calibration plot showed the model was well calibrated for most risk groups, although it tended to underestimate risk in some high-risk groups.

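The calibration check can be reproduced with scikit-learn's `calibration_curve`; the predictions below are synthetic stand-ins for the model's outputs, used only to show the mechanics:

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 2000)                    # stand-in predicted ventilation risks
y = (rng.uniform(0, 1, 2000) < p).astype(int)  # outcomes sampled to match those risks
frac_pos, mean_pred = calibration_curve(y, p, n_bins=10)
# A well-calibrated model tracks the diagonal: frac_pos is close to mean_pred in each bin.
```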
