Sepsis is a life-threatening condition triggered by the body's response to an infection, and it can quickly lead to death, so there are ongoing efforts to detect sepsis early. Each hour of delay in diagnosing and treating sepsis increases patient mortality by about seven percent.

To diagnose sepsis, doctors run a variety of tests that look for signs such as: bacteria in the blood or other body fluids, evidence of infection on imaging, a high or low white blood cell count, a low platelet count, low blood pressure, too much acid in the blood, low blood oxygen, problems with blood clotting, uneven electrolyte levels, and kidney or liver problems.

Critical care patients generate huge volumes of data about their current state, which can be a challenge for clinicians to digest.

Suppose computers could help clinicians analyze the data and detect diseases like sepsis early, before patients are deathly ill. That's what we're working towards.

Artificial Intelligence and Machine Learning at the hospital bedside with Edge Computing

Artificial Intelligence (AI) and Machine Learning (ML) are fields of computing that enable automated decision making. The goal of AI is to create intelligent systems that can perform tasks the way a human would. Machine learning is a subset of AI that allows a computer to learn from data automatically, without being explicitly programmed.

Thus AI/ML may be able to help identify patients who are at risk for diseases like sepsis earlier, and start treatment faster based on data about what has been effective for other patients.

In a distributed deployment model, healthcare organizations can deploy an AI model close to the patient, often at the bedside. This is an example of an edge computing strategy, placing computing closer to the source of data or the user.

How does a computer program help diagnose a condition that is traditionally diagnosed using blood tests and imaging?

Methodology

An AI/ML solution uses data to train a computer program, or model, to make a decision: in this case, to predict which patients might be developing sepsis, helping clinicians detect it early and treat it before patients reach organ failure or death.

The model is trained on data from patients who were previously diagnosed with sepsis, an approach known as supervised learning. A wide variety of data is used across genders, races and ethnicities, ages, and economic strata, and it is sourced from anonymized data sets to avoid violating patient privacy.

The computer model examines the data from these patients and looks for patterns in structured data such as respiration rate, oxygen saturation, other vital signs and level of consciousness, measurements that are routinely collected by nursing staff.
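As a rough illustration of this kind of supervised training (a sketch, not the actual model described here), a classifier could be fit on these routinely collected measurements with scikit-learn. The file name, column names and label below are hypothetical placeholders.

# Minimal sketch: supervised training on structured vital-sign data.
# The CSV file, feature columns and "sepsis_label" column are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Anonymized historical records, one row per patient observation,
# labeled 1 if the patient was later diagnosed with sepsis.
data = pd.read_csv("anonymized_vitals.csv")
features = ["respiration_rate", "oxygen_saturation", "heart_rate",
            "systolic_bp", "temperature", "level_of_consciousness"]
X, y = data[features], data["sepsis_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Check how well the model separates sepsis from non-sepsis cases.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.3f}")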

It's also possible to analyze unstructured data. Imagine a physician trying to read the handwritten or typed medical notes of thousands of other sepsis patients to help diagnose whether a patient has sepsis.
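To give a sense of how such unstructured notes could be handled, here is a minimal sketch using scikit-learn's TF-IDF vectorizer and a logistic regression classifier. The example notes and labels are invented placeholders, not real clinical data.

# Sketch: flagging free-text clinical notes with TF-IDF features.
# The notes and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient febrile and tachycardic, suspected line infection",
    "routine post-op check, vitals stable, no complaints",
    "rising lactate, hypotensive despite fluids, antibiotics started",
    "ambulating well, afebrile, discharge planned tomorrow",
]
labels = [1, 0, 1, 0]  # 1 = note from a patient later diagnosed with sepsis

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipeline.fit(notes, labels)

new_note = ["spiking fevers overnight, white cell count elevated, cultures drawn"]
print(pipeline.predict_proba(new_note)[0][1])  # estimated probability of sepsis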

By analyzing thousands of anonymized patient data sets that clinicians simply don't have the time to review, the computer model can help answer questions such as:

  1. A patient has systemic inflammatory response syndrome (SIRS), but since SIRS can be present in non-sepsis patients, does the computer model believe this patient has sepsis?

  2. When should antibiotic treatment begin?

  3. Based on the records of thousands of other patients, did this patient show signs of sepsis before organ dysfunction began?

The computer model monitors these volumes of data throughout the day, and when it detects combinations of lab results, vital signs and other measurements that are consistent with sepsis, the system alerts clinicians with a checklist and recommended treatment steps so they can intervene quickly. This is real-time diagnostics for patient care at the bedside, enabled by AI/ML.
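As a simplified sketch of that alerting loop, a trained model can score each new set of observations and surface a checklist when the predicted risk crosses a threshold. The threshold value and checklist wording here are illustrative assumptions, not clinical guidance.

# Sketch: score incoming observations and alert clinicians when the
# predicted sepsis risk crosses a chosen threshold.
# The threshold and checklist items are illustrative assumptions.
import pandas as pd

ALERT_THRESHOLD = 0.8  # tuned against an acceptable false-alarm rate

SEPSIS_CHECKLIST = [
    "Draw blood cultures before starting antibiotics",
    "Measure serum lactate",
    "Begin broad-spectrum antibiotics",
    "Start IV fluid resuscitation if hypotensive",
]

def score_observation(model, observation: dict) -> None:
    """Score one set of vitals/labs and alert if the predicted risk is high."""
    risk = model.predict_proba(pd.DataFrame([observation]))[0][1]
    if risk >= ALERT_THRESHOLD:
        print(f"ALERT: predicted sepsis risk {risk:.0%}")
        for step in SEPSIS_CHECKLIST:
            print(f"  - {step}")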

[Figure: sepsis AI/ML model]

Accelerating sepsis detection with Red Hat  

Accelerating the detection of sepsis requires the right technologies at the hospital bedside. For example, you might put together a solution with Red Hat OpenShift and Red Hat OpenShift Data Science communicating with an edge computing architecture built on KubeFrame, combining Red Hat software with HPE hardware and NVIDIA GPUs. This could create a sandbox environment for data scientists to develop, train and test machine learning models and then deploy them for use in intelligent applications.

The Machine Learning Workflow

The workflow begins with gathering and preparing data. Once the data has been gathered, cleaned and processed, the second stage of the ML workflow can begin: data scientists train a range of models and compare their performance while considering trade-offs such as prediction accuracy, processing time, and memory constraints. After model training, the final step of the workflow is moving the chosen model into production.
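A hedged sketch of that second stage might compare a few candidate models with cross-validation before choosing one to promote. The training file below is a hypothetical stand-in for the prepared data set.

# Sketch: comparing candidate models on the prepared training data.
# "prepared_training_data.csv" and its columns are hypothetical.
import time
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("prepared_training_data.csv")
X, y = data.drop(columns=["sepsis_label"]), data["sepsis_label"]

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
}

for name, candidate in candidates.items():
    start = time.perf_counter()
    auc = cross_val_score(candidate, X, y, cv=5, scoring="roc_auc").mean()
    elapsed = time.perf_counter() - start
    # Trade off prediction accuracy against processing time for each candidate.
    print(f"{name}: mean ROC AUC = {auc:.3f}, time = {elapsed:.1f}s")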

OpenShift Data Science enables the JupyterLab service by default, allowing users to develop models and implement analytic techniques in Jupyter notebooks. Users can load Red Hat-provided container images and develop models using the latest frameworks, including scikit-learn and XGBoost.

The Red Hat Marketplace has a range of certified AI/ML offerings. These offerings can be combined with the Red Hat OpenShift Data Science add-on to enable a wider AI/ML ecosystem. Integration with Red Hat OpenShift Streams for Apache Kafka allows data scientists to test and develop models on streaming data.
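As one illustration of that integration, a model could score vital-sign messages as they arrive on a Kafka topic. The topic name, broker address and message schema are assumptions, and the kafka-python client is just one possible choice.

# Sketch: scoring vital-sign messages as they arrive on a Kafka topic.
# The topic name, broker address and message schema are assumptions;
# kafka-python is used here as one possible client library.
import json
import pandas as pd
from kafka import KafkaConsumer

def monitor_stream(model) -> None:
    """Consume vitals from Kafka and score each message with a trained model."""
    consumer = KafkaConsumer(
        "patient-vitals",                             # hypothetical topic
        bootstrap_servers="my-kafka-bootstrap:9092",  # hypothetical broker
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        observation = message.value  # e.g. {"respiration_rate": 24, ...}
        risk = model.predict_proba(pd.DataFrame([observation]))[0][1]
        if risk >= 0.8:
            print(f"High predicted sepsis risk: {risk:.0%}")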

Sepsis detection is only one use case for AI/ML. Your organization may not be in the business of detecting life-threatening diseases, but AI/ML can help in other ways. Learn more by trying OpenShift. To learn more about Red Hat OpenShift Data Science, Red Hat's Chris Chase has a great demo video that conveys the value of this new service, and you can also visit our OpenShift Data Science page.


About the author

Jonathan Gershater joined Red Hat in 2013. Prior to Red Hat, Gershater worked at Trend Micro, Sun Microsystems, Entrust Technologies and 3Com. At Red Hat, Gershater leads market analysis for Red Hat’s open hybrid cloud platform, OpenShift, and related technologies.
