Safe and Explainable AI-enabled Decision Making for Personalized Treatment
The project focuses on the design and implementation of AI-based clinical decision support systems for personalized treatment and management recommendations. The proposed research spans three areas. First, AI foundations: problems in trustworthy medical AI, such as effectively integrating medical domain knowledge into learning models, making the recommendations of AI algorithms explainable to clinicians, and establishing worst-case safety guarantees. Second, AI systems: infrastructure to facilitate the development of explainable models suitable for integration into clinician workflows. Third, AI use cases: representative clinical challenges spanning inpatient and outpatient settings, including prediction of in-hospital cardiac arrest, timely diagnosis of sepsis and prediction of the need for intervention, and prediction of response to neoadjuvant or adjuvant chemotherapy in breast cancer patients. To execute this agenda, the team brings together clinicians and researchers with expertise spanning AI, biostatistics, data science, and machine learning.