streamlit-llm

Understanding Medical Observations with Large Language Models

The purpose of this application is to test LLM-generated interpretations of medical observations. The explanations are generated fully automatically by a large language model. This application is intended for experimental purposes only: it does not provide support for real-world cases and does not replace advice from medical professionals.

https://experimental-clinical-support.streamlit.app/

Architecture

[App architecture diagram]

  • Streamlit caching of the loaded Synthea synthetic medical data
  • Prompting with a LangChain PromptTemplate
  • LLM served by a Clarifai backend
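The pipeline above (cache the data, fill a prompt template, send it to the model) can be sketched in plain Python. This is an illustration, not the repo's actual code: the template text, the loader, and the LOINC lookup table are hypothetical, and `functools.lru_cache` stands in for Streamlit's caching decorator.

```python
from functools import lru_cache

# Illustrative stand-in for a LangChain PromptTemplate: a format string
# with named input variables (the template wording is hypothetical).
PROMPT_TEMPLATE = (
    "Explain the following medical observation in plain language:\n"
    "Observation: {observation}\n"
    "Value: {value} {unit}\n"
)

@lru_cache(maxsize=None)  # mimics Streamlit's caching of loaded records
def load_observation(code: str) -> tuple:
    # Hypothetical loader; a real app would read Synthea CSV/FHIR exports.
    synthetic = {"9279-1": ("Respiratory rate", 14, "/min")}
    return synthetic[code]

def build_prompt(code: str) -> str:
    observation, value, unit = load_observation(code)
    return PROMPT_TEMPLATE.format(observation=observation, value=value, unit=unit)

print(build_prompt("9279-1"))
```

The resulting string would then be passed to the Clarifai-hosted model; only the prompt construction is shown here.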

Run the app

pip install -r requirements.txt
streamlit run Main.py

Data source

https://synthetichealth.github.io

Discussion

  • Consider changes in values over time instead of interpreting each time point individually
  • Interpret measurements in context (age, gender)
  • Account for differing national diagnostic criteria
  • Preserve privacy while interacting with LLMs
  • Parse LLM output and process it for further queries
  • Add warning labels according to the clinical significance of the results
  • Add guardrails to guide user input
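The first point, judging a series rather than a single time point, can be sketched with a small helper. The function and its 10% threshold are hypothetical, shown only to illustrate the idea of pre-computing a trend before asking the LLM to explain it.

```python
def interpret_trend(values, threshold=0.1):
    """Classify a measurement series instead of judging each value alone.

    Hypothetical helper: compares the latest value against the mean of
    the earlier ones and reports a relative change beyond `threshold`.
    """
    if len(values) < 2:
        return "insufficient data"
    baseline = sum(values[:-1]) / len(values[:-1])
    delta = (values[-1] - baseline) / baseline
    if delta > threshold:
        return "rising"
    if delta < -threshold:
        return "falling"
    return "stable"

# e.g. systolic blood pressure readings over four visits
print(interpret_trend([120, 122, 121, 140]))  # rising
```

The label ("rising", "falling", "stable") could then be inserted into the prompt so the model explains an established trend rather than inventing one.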

Prompt engineering

Problem: Hallucinations

Example 1: the model contradicts its own stated reference range (14 breaths per minute falls within 12-20, yet is called "slightly higher than normal"):

Respiratory rate: This is the number of breaths a person takes per minute. A normal respiratory rate for an adult at rest is between 12 and 20 breaths per minute. In this case, the patient's respiratory rate is 14 breaths per minute, which is slightly higher than normal.

Example 2: the model invents a comparison to a manual count that was never part of the input:

Input: Erythrocyte distribution width [Entitic volume] by Automated count, Platelet distribution width [Entitic volume] in Blood by Automated count [...]

Output: So, in simple terms, these observations are telling us that the red blood cells in this patient's blood are spread out over a pretty big range of sizes, and the machine that counted them got a slightly higher measurement than the manual count.
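One mitigation for Example 1 is to compute the in/out-of-range verdict in code and embed it in the prompt, so the model only has to explain a given verdict rather than derive one. This is a hypothetical sketch: the reference-range table and prompt wording are illustrative, and real ranges vary by source and population.

```python
NORMAL_RANGES = {
    # Hypothetical reference ranges; real criteria differ by guideline.
    "Respiratory rate": (12, 20),
}

def classify(name: str, value: float) -> str:
    """Pre-compute the verdict so the LLM cannot hallucinate it."""
    low, high = NORMAL_RANGES[name]
    if value < low:
        return "below the normal range"
    if value > high:
        return "above the normal range"
    return "within the normal range"

def guarded_prompt(name: str, value: float, unit: str) -> str:
    low, high = NORMAL_RANGES[name]
    verdict = classify(name, value)
    return (
        f"{name} is {value} {unit}, which is {verdict} "
        f"(normal: {low}-{high} {unit}). "
        "Explain this finding in plain language without changing the verdict."
    )

print(guarded_prompt("Respiratory rate", 14, "/min"))
```

With the verdict fixed in the prompt, a value of 14/min can no longer be described as "slightly higher than normal".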

Hackathon resources

Evaluation criteria

  • Inventiveness
  • Error-free operation
  • Public repository
  • Hosted on Streamlit Community Cloud
  • Tools: LangChain, Clarifai, ...
  • LLM pain points: transparency, trust, accuracy, privacy, cost reduction, ethics

Learning resources