Artificial Intelligence has a long tradition in computer science dating back to the 1950s and has regained interest due to the practical success of Machine Learning (ML) techniques. The goal is to create usable AI systems that can learn from data, extract and generalize knowledge, and uncover underlying factors that help explain the data. Selecting appropriate features, ensuring data quality, and interpreting the data correctly remain the main challenges, since these factors determine how good the results can be.
Finding the balance between Machine Learning explainability & performance
Medical decision support increasingly relies on ML models because of their flexibility and data-driven predictive power, which delivers impressive results even without domain expertise. However, applications in routine clinical care are rare. This is partly because confidence in the safety of these algorithms is lacking, and because acceptance of complex models requires user interfaces that clinicians can easily understand. Depending on the ML method, explanations either follow directly from the model or are inherently difficult to obtain. Developing explainable AI systems therefore usually means striking the right balance between hard-to-explain but high-performing ML methods and well-explained but lower-performing ones.
What options do I have to make my ML models explainable?
ML model interpretability has previously been addressed for Multi-Layer Perceptrons (MLP), Support Vector Machines, Fuzzy Logic, and deep neural networks. Approaches to explaining ML algorithms currently fall into the following categories: (a) feature attribution, (b) saliency maps, (c) activation maximization, and (d) metric learning. At the same time, several approaches are available for building explainable prediction systems; these are classified as ante-hoc (using models that are interpretable by design, e.g. partial response networks, evolutionary fuzzy modeling) and post-hoc (black-box model interpretation methods, e.g. activation maximization, rule extraction, Fisher networks).
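As a minimal sketch of post-hoc feature attribution (category (a) above), permutation importance scores each input feature of a trained black-box model by how much shuffling that feature degrades predictive performance. The data and model below are purely illustrative, not part of the ASCAPE platform:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Toy data: only the first two of five features carry any signal.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" model: a random forest trained on the toy data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Post-hoc attribution: accuracy drop when each feature is permuted.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

The informative features receive the highest scores, while the noise features score near zero; the same recipe applies to any fitted estimator, which is what makes it a post-hoc, model-agnostic method.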
Taking a step forward: The ASCAPE approach
ASCAPE is developing an AI-support framework, to be used by doctors, that predicts and helps improve the Quality of Life (QoL) of cancer patients. To this end, we apply a user-centered approach to develop the explanation system for the ASCAPE models, adopting functionality, usability, security, and validation requirements to meet the needs of the target stakeholders. Providing explanation capabilities in the ASCAPE platform means operating in a field of tension: one driving force is the set of requirements identified by the stakeholders and the application context; the other is the need for models with high predictive quality, whose explanation capabilities depend on the ML algorithms used to train them. Progress beyond the state of the art in explainable AI is driven by these two factors. Specifically, ASCAPE has:
- Identified key determinants of impacts on quality of life: We investigated how reversing the information flow in the model, by mapping outputs back to input variables, can help identify the data points in patient data that impact patient QoL. Through explainability we can also assess the significance of a data point and thus evaluate a proposed intervention for its expected effectiveness.
- Used explainability to predict model outputs: Building on the previous advancement, we explored how explainability can be leveraged to gain predictive capability. For example, in combination with Monte Carlo methods, we used explainability to identify medical interventions and recommend them to physicians.
- Combined explainability in ML with human knowledge: ASCAPE researchers are working on how to feed knowledge such as fitness or lifestyle habits into the methods to create explanations that medical professionals can use more effectively.
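The first advancement above, reversing the information flow from output back to input variables, can be sketched with input gradients: the gradient of the model output with respect to each input indicates which data points push the prediction most. The feature names and weights below are hypothetical stand-ins, not ASCAPE's real patient schema or model:

```python
import numpy as np

# Hypothetical patient features (illustrative names only).
features = ["age", "anxiety_score", "physical_activity", "pain_level"]

# A toy logistic model predicting risk of QoL decline; weights are made up.
w = np.array([0.1, 0.8, -0.6, 0.9])
b = -0.2

def predict(x):
    """Probability of QoL decline for one patient vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def input_gradient(x):
    """Gradient of the output w.r.t. the inputs: maps the prediction
    back onto individual data points (the reversed information flow)."""
    p = predict(x)
    return p * (1.0 - p) * w  # derivative of sigmoid(w @ x + b) w.r.t. x

x = np.array([0.5, 1.2, 0.3, 1.5])  # one illustrative patient
for name, g in zip(features, input_gradient(x)):
    print(f"{name:>18}: {g:+.3f}")
```

A large positive gradient marks a data point that worsens predicted QoL, a negative one marks a protective factor; this directly supports judging which proposed intervention is expected to be effective.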
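The second advancement, combining explainability with Monte Carlo methods to surface intervention candidates, can be illustrated by randomly sampling changes to modifiable features and keeping the one with the best predicted QoL gain. The QoL model and feature set here are invented for the sketch and do not reflect ASCAPE's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained QoL model: higher score = better QoL.
# Features: [physical_activity, sleep_hours, pain_medication_dose]
def predicted_qol(x):
    return 2.0 * np.tanh(x[0]) + 0.5 * x[1] - 0.3 * x[2] ** 2

baseline = np.array([0.2, 6.0, 1.0])  # one illustrative patient state

# Monte Carlo search: sample random interventions (deltas on modifiable
# features) and keep the one with the largest predicted QoL improvement.
best_delta, best_gain = None, 0.0
for _ in range(1000):
    delta = rng.uniform(-1.0, 1.0, size=3)
    gain = predicted_qol(baseline + delta) - predicted_qol(baseline)
    if gain > best_gain:
        best_delta, best_gain = delta, gain

print("recommended change:", np.round(best_delta, 2),
      "predicted QoL gain:", round(best_gain, 2))
```

In a real system the sampled deltas would be constrained to clinically feasible interventions, and the recommendation would be presented to the physician together with its explanation rather than applied automatically.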