Explainable AI in health is still in its early stages, and the problem is made harder by the highly heterogeneous, uncertainty-laden nature of real-world clinical settings. In healthcare, AI typically takes a human-centered form: it supports clinicians during diagnosis and treatment through clinical decision support systems (CDSS), which are often built on existing knowledge bases and predefined inference rules. With the unprecedented success of deep neural networks, which outperform the machine learning models currently deployed in clinical settings, explainable AI must satisfy not only the criteria of human comprehension but also ethical and legal expectations, so as to avoid infringing on patients' rights and to guard against the risks of unchecked automation.

SHAP (SHapley Additive exPlanations) has been applied in clinical practice to interpret patient risk predictions. The framework explains a machine learning model's predictions by attributing them to input features relative to counterfactual (baseline) inputs. For deep learning models, however, more advanced post-hoc explanation methods are needed. For example, DeepSHAP was developed by adapting the DeepLIFT algorithm to estimate the relative importance (i.e., Shapley values) of input features by comparing the activations they produce against those of a reference input. Applying existing post-hoc explanation methods still requires significant adaptation to specific clinical settings.

In healthcare, shared semantics such as biomedical ontologies and controlled vocabularies (e.g., SNOMED CT, ICD-10, LOINC, RxNorm) have been widely adopted in clinical decision-making systems. With the benefit of formal data semantics and the rich knowledge encoded in biomedical ontologies, knowledge graphs can be constructed to better support the explainability of AI. For instance, interconnections and interdependencies among input features can be identified and explained through knowledge graphs.
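To make the reference-input attribution idea above concrete, the following is a minimal, self-contained sketch that computes exact Shapley values for a toy risk scorer. The model, its weights, the feature names, and the baseline values are all illustrative assumptions, not drawn from any clinical system; practical libraries such as `shap` implement scalable approximations (e.g., DeepSHAP) of the same computation for real models.

```python
from itertools import combinations
from math import factorial

def risk_model(features):
    # Hypothetical linear risk scorer; weights are illustrative only.
    weights = {"age": 0.03, "bp": 0.02, "glucose": 0.05}
    return sum(weights[f] * v for f, v in features.items())

def shapley_values(model, x, reference):
    """Exact Shapley values for each feature of input x.

    Features outside the coalition are set to their reference
    (baseline) value, mirroring how DeepSHAP-style methods compare
    a model's response against a reference input.
    """
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 over the other features
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {f: x[f] if (f in S or f == i) else reference[f]
                          for f in names}
                without_i = {f: x[f] if f in S else reference[f]
                             for f in names}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

patient = {"age": 70, "bp": 150, "glucose": 180}   # hypothetical patient
baseline = {"age": 50, "bp": 120, "glucose": 100}  # hypothetical reference
phi = shapley_values(risk_model, patient, baseline)

# Efficiency property: attributions sum to f(x) - f(reference).
assert abs(sum(phi.values())
           - (risk_model(patient) - risk_model(baseline))) < 1e-9
```

For a linear model the exact Shapley value of each feature reduces to its weight times its deviation from the reference, which makes the toy example easy to verify by hand; the exponential subset enumeration is why practical tools rely on approximations for deep models.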
This workshop aims to bring together domain experts, researchers, and practitioners for an in-depth discussion of these critical and timely issues.
The workshop will be open to all conference attendees. Each submitted paper will be evaluated by three reviewers on novelty, significance, technical soundness, experiments, and presentation. Reviewers will be program committee members or researchers recommended by them.
Selected workshop papers will be extended and published in the Journal of Data Intelligence.
Submitted papers should be at most 8 pages, and demo papers no more than 4 pages. All submissions must be prepared using the ACM camera-ready template and submitted electronically in PDF format.
Topics of interest include, but are not limited to:
Explainable AI applications in healthcare
Post-hoc explainable AI methods for healthcare problems
Knowledge graphs for explainable AI in health
Counterfactual analysis for AI in health
Metrics for explainable AI in health
Legal and ethical issues related to explainable AI in health
Gradient methods and SHAP evaluation for AI in health
Bias in explainable AI in health
Cross-modality explanation in health
Time-series explanation methods for patient risk prediction
Feature-, layer-, and neuron-level attribution analysis for AI in health
Novel frameworks for XAI visualizations