Robust Explainable AI: the Case of Counterfactual Explanations

Spotlight tutorial at the 26th European Conference on Artificial Intelligence (ECAI 2023)

Tutorial date and time

1 October 2023, 11:00 - 12:30
Faculty of Physics, Astronomy and Applied Computer Science and Faculty of Mathematics (A-1-08)

Summary

Counterfactual explanations (CXs) are routinely used to shed light on the decisions of machine learning models; however, CX generation strategies often lack robustness, which may jeopardise their explanatory function. This tutorial aims to introduce Robust Explainable AI, a rapidly growing field that offers novel solutions to alleviate this problem and improve the trustworthiness of CXs.

Outline

The tutorial will begin with a brief introduction to neural networks trained to solve classification tasks. We will then introduce CXs, together with the most common approaches used to compute them, including exact and approximate methods. Next, we will turn to the problem of robustness, illustrate different reasons behind its absence, and survey approaches proposed within the last couple of years to address this issue.

Detailed outline:

  1. Neural networks and (binary) classification tasks;
  2. CXs and recourse (definitions and examples);
  3. Common approaches to compute CXs (exact and approximate);
  4. Robustness of CXs: causes;
  5. Robustness of CXs: implications;
  6. Taxonomy of existing notions of robustness;
  7. A bird's eye view on existing solutions;
  8. Robust XAI and other areas of AI/CS (open discussion, ~15 minutes).
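To make items 2-5 concrete, the sketch below shows one common approximate approach to computing a CX (a gradient-based search in the style of Wachter et al., minimising a classification loss plus a distance penalty) on a toy logistic classifier, and then illustrates the robustness problem: a CX that is valid for the original model can be invalidated by a small change to the model's parameters, e.g. after retraining. All model parameters, function names, and the specific perturbation here are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generate_counterfactual(w, b, x, target=1.0, lam=0.1, lr=0.05, steps=2000):
    """Gradient-based CX search for a logistic classifier f(x') = sigmoid(w.x' + b).

    Minimises  BCE(f(x'), target) + lam * ||x' - x||^2  by gradient descent,
    i.e. find a nearby input x' that the model assigns to the target class.
    """
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # d/dx' of BCE(sigmoid(w.x'+b), target) is (p - target) * w;
        # d/dx' of the distance penalty is 2 * lam * (x' - x).
        grad = (p - target) * w + 2.0 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

# Toy classifier and an input classified as 0 (f(x) = sigmoid(-2) < 0.5).
w, b = np.array([1.0, -1.0]), 0.0
x = np.array([-1.0, 1.0])

x_cf = generate_counterfactual(w, b, x)
valid = sigmoid(w @ x_cf + b) > 0.5          # CX crosses the decision boundary

# Robustness failure mode: a shift in the model's bias (e.g. induced by
# retraining on new data) can flip the CX back to the original class.
b_shifted = b - 1.5
still_valid = sigmoid(w @ x_cf + b_shifted) > 0.5
```

Here `valid` is True while `still_valid` is False: the recourse suggested to the user stops working once the model changes, which is exactly the kind of failure that the robustness notions and solutions in items 4-7 aim to characterise and prevent.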

About the speaker

Francesco is a Research Fellow within the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. For more details about Francesco and his research, please visit this link.

Recent publications by the speaker on the topic of this tutorial include:

  1. Robust Explanations for Human-Neural Multi-agent Systems with Formal Verification. F. Leofante, A. Lomuscio. The 20th European Conference on Multi-Agent Systems (EUMAS 2023).
  2. Counterfactual Explanations and Model Multiplicity: a Relational Verification View. F. Leofante, E. Botoeva, V. Rajani. The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023).
  3. Towards Robust Contrastive Explanations for Human-Neural Multi-agent Systems. F. Leofante, A. Lomuscio. The 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023).
  4. Formalising the Robustness of Counterfactual Explanations for Neural Networks. J. Jiang*, F. Leofante*, A. Rago, F. Toni. The 37th AAAI Conference on Artificial Intelligence (AAAI 2023). * Equal contribution.

Resources

Slides used for the tutorial are available here.

Acknowledgements

This work has received funding from Imperial College London under the Imperial College Research Fellowship scheme.