Medizinische Informatik, Statistik und Dokumentation

Research focus on Human-Computer Interaction for Medicine and Healthcare

PI: Andreas Holzinger

Focus: Artificial intelligence has been remarkably successful thanks to advances in statistical machine learning, even surpassing human performance in certain medical tasks. However, the complexity of such approaches often makes it impossible to understand why an algorithm arrived at a particular result. The focus of our research is on comprehensibility and thus interpretability. This is where a human-in-the-loop can be helpful, because human experts can contribute experience, contextual understanding, and implicit and conceptual knowledge.

Cooperation: The Research Team Holzinger cooperates internally with the Diagnostic and Research Institute of Forensic Medicine and internationally with the xAI Lab of the Alberta Machine Intelligence Institute in Edmonton, the Life Sciences Discovery Center in Toronto, Canada, and the Human-Centered AI Lab at the University of Technology Sydney, Australia.

Projects

Explainable-AI

  • The FWF research project P-32554 "A Reference Model of Explainable Artificial Intelligence (AI) for Medicine" addresses fundamental questions, e.g. why AI can solve some tasks better than human experts can, how an AI arrived at a particular result, and what happens when input data are changed counterfactually. To this end, methods, explanatory patterns, and quality criteria for explainability and for the causal understanding of explanations are being developed.
  • Duration: 2019-2023
  • Funded by: FWF
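The counterfactual probing mentioned above can be illustrated with a minimal sketch (not the project's actual method): change a single input feature and observe whether a model's decision flips. The toy linear classifier, its weights, and the feature values below are purely illustrative assumptions.

```python
def predict(weights, bias, features):
    """Toy linear classifier: returns 1 if the weighted sum crosses 0."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

def counterfactual_flip(weights, bias, features, index, new_value):
    """Replace one feature value and report whether the prediction changes."""
    original = predict(weights, bias, features)
    altered = list(features)
    altered[index] = new_value  # the counterfactual intervention
    changed = predict(weights, bias, altered)
    return original, changed, original != changed

# Illustrative example: feature 0 dominates the decision.
weights, bias = [2.0, -0.5], -1.0
orig, new, flipped = counterfactual_flip(weights, bias, [1.0, 0.5], 0, 0.0)
print(orig, new, flipped)  # → 1 0 True
```

Whether a small, plausible change to an input flips the output is one simple signal of which features causally drive a decision.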

Feature Cloud

  • As part of the EU RIA project 826078 "Privacy preserving federated machine learning", whose goal is to exchange only learned representations (the feature parameters theta, hence the project name), the team works on distributed machine learning. It contributes to the explainability and interpretability of such approaches, in particular graph-based explainable AI and aspects of efficient human-AI interaction that support ethically responsible and legally defensible machine learning in medicine.
  • Duration: 2019-2024
  • Funded by: EU
  • Cooperation partners: TU Munich, Uni Hamburg, Uni Marburg, SBA Research Vienna, University of Southern Denmark, Uni Maastricht, Research Institute Vienna, Gnome Design SRL
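The federated idea described above can be sketched in a few lines, under strongly simplified assumptions: each site trains locally and shares only its learned parameter theta, so raw patient data never leave the site. The "training" here is a toy mean-fit, purely for illustration, not the project's actual algorithm.

```python
def local_theta(local_data):
    """Toy local training: fit theta as the mean of the site's own data."""
    return sum(local_data) / len(local_data)

def federated_average(sites):
    """Server-side step: aggregate only the thetas, weighted by site size.

    The aggregator sees the parameters, never the underlying records.
    """
    total = sum(len(data) for data in sites)
    return sum(local_theta(data) * len(data) for data in sites) / total

# Three hypothetical sites; only their thetas reach the aggregator.
sites = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
print(federated_average(sites))  # → 3.5
```

With this size-weighted average, the aggregated theta equals what a centralized fit over all records would produce, which is the appeal of the approach: comparable results without pooling the data.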

EMPAIA

  • In the project "Ecosystem for pathology diagnostics with AI support", the Austrian sister project of the German AI platform www.empaia.org, we work together with the Institute of Pathology to make machine decisions in digital pathology transparent, traceable and interpretable for medical experts. Novel human-AI interfaces, developed and trained together with medical experts, will improve reliability, accountability, fairness and trust in AI and promote ethically responsible machine learning.
  • Duration: 2020-2023
  • Funded by: FFG
  • Cooperation partners: TU Berlin, Charité Berlin

Principal Investigator

Andreas Holzinger  
T: +43 316 385 13883