The broader impact/commercial potential of this I-Corps project is the development of explainable Artificial Intelligence (XAI) methods for healthcare data. The number of electronic medical records is growing, and machine learning and deep learning models, especially large language models, are increasingly employed to address healthcare needs. However, healthcare is a highly regulated domain, and explainability for black-box AI models is becoming critical for any AI application: users need to comprehend and trust the results produced by machine learning algorithms. The proposed XAI technology may be used to describe an AI model, its expected impact, and its potential biases. Further, the proposed technology may translate AI predictions into explainable medical interventions, enabling the last mile delivery of AI in healthcare. The commercial potential of these technologies may impact three major groups: health insurance companies, which may provide better care management interventions and achieve personalized care delivery based on XAI; health analytics companies, which rely on explanations to enhance their products and meet government regulations; and medical device startups, which demand explainable analytical outputs from the data collected by their devices to enrich the user experience.
This I-Corps project is based on the development of explainable Artificial Intelligence (XAI) methods applied to the healthcare industry. Providing explainability is critical for AI health applications. Healthcare is a unique domain with multimodal data: tabular data capturing patient demographics, textual data from medical notes, time series data from vital sign measurements, imaging data from medical scans, and waveform data from EEG and ECG. To provide a holistic view of these data, deep learning is used to create universal embeddings across the different modalities and to build prediction models for health risks. However, deep learning methods lack transparency and therefore demand explainability. The proposed technology combines integrated gradients with ablation studies to identify the contribution of different data components to the explanation. In addition, the proposed platform incorporates knowledge graphs into the prediction and explanation workflow to detect relationships between contributing features and generate a holistic explanation, and it translates weights or feature importance into risk scores to enable the last mile delivery of AI in healthcare. The proposed XAI method may be used to explain the importance of input data components; identify contributing features at both the individual patient level and the patient cohort level; scale while conserving computational resources; and self-improve by using reinforcement learning to reinforce positive feedback.
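To illustrate the attribution technique named above, the following is a minimal sketch of integrated gradients on a hypothetical logistic risk model. The model, its weights, and the example patient features are illustrative assumptions, not the project's actual system; the sketch shows only the core computation, a Riemann-sum approximation of the path integral of gradients from a baseline input to the patient's input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Hypothetical risk model: logistic regression over patient features.
    return sigmoid(w @ x)

def model_grad(x, w):
    # Analytic gradient of the logistic model with respect to the input.
    s = model(x, w)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, w, steps=100):
    # IG_i = (x_i - x'_i) * average of dF/dx_i along the straight-line
    # path from the baseline x' to the input x (midpoint Riemann sum).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack(
        [model_grad(baseline + a * (x - baseline), w) for a in alphas]
    )
    return (x - baseline) * grads.mean(axis=0)

# Illustrative features (e.g., normalized age, lab value, vital sign).
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.8, 0.2, 0.6])
baseline = np.zeros_like(x)  # all-zero reference patient

attr = integrated_gradients(x, baseline, w)
print(attr)
```

A useful sanity check is the completeness axiom: the attributions sum (up to the discretization error of the Riemann sum) to the difference in model output between the input and the baseline, `model(x, w) - model(baseline, w)`. Per-feature attributions of this kind are what the platform would then translate into risk scores.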