Université de Caen

* PhD position on explainable AI (XAI) *

Candidate profile and practical details
We are looking for a motivated and hardworking student for a PhD in the
field of explainable AI. A background in computer vision, deep
learning, and statistics is desirable. The PhD will take place at the
GREYC laboratory in Caen (https://www.greyc.fr/en/home/) and will be
co-supervised by Frederic Jurie (frederic.jurie@unicaen.fr) and Loic
Simon.

With the advent of highly efficient neural networks and their
pervasive use in modern AI systems, the research community has
questioned their reliability for high-stakes decision making [2].
Moreover, legal regulators have taken steps to ensure that users of AI
decision systems have the right to obtain explanations from the
systems under consideration. This is for instance the case in Europe
through the General Data Protection Regulation (*) (GDPR). Such
regulation may well impede the integration and development of
high-performance AI products unless the functioning of the underlying
systems can be robustly explained. This is the goal of so-called
Explainable Artificial Intelligence (a.k.a. XAI) [1].
The need for interpretability of deep neural networks was recognized
by the research community early on. The main line of research concerns
post-hoc analysis, where a fully trained network is scrutinized so as
to expose its inner workings. However, post-hoc explanation is far
from sufficient, since it may misinterpret an entirely unreasonable
decision-making process and present it as a convincingly reasonable
one, and vice versa. This stumbling block was underlined in a
high-impact article [2], which drew the reader's attention to how
confounding factors, as well as biases in training datasets, can
induce misinterpretations. The article argues for the necessity of
intrinsic interpretability in XAI systems. Such systems, also referred
to as explainable by design, must enforce an easy interpretation of
their decision making from the start (as opposed to after the fact, as
in post-hoc explanation). The topic of this PhD will be in this
direction (XAI by design).
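To make the distinction concrete, here is a minimal sketch contrasting the two approaches on a toy model (the weights, feature names, and the choice of gradient saliency as the post-hoc method are invented for illustration): a post-hoc explanation must be computed after the fact from the trained model, whereas an interpretable-by-design linear model carries its explanation in its own coefficients.

```python
import math

# Toy "trained" one-layer network f(x) = sigmoid(w . x).
# Weights are made up for illustration only.
w = [2.0, -0.5, 0.1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Post-hoc explanation (gradient saliency): recover per-feature influence
# |df/dx_i| after training. For this model, df/dx_i = w_i * sigmoid'(w . x).
def saliency(x):
    z = sum(wi * xi for wi, xi in zip(w, x))
    s = sigmoid(z)
    return [abs(wi * s * (1.0 - s)) for wi in w]

# By-design explanation: for an intrinsically interpretable (here, linear)
# model, the coefficients themselves are the explanation; no extra
# analysis step is required.
explanation_by_design = {f"feature_{i}": wi for i, wi in enumerate(w)}

print(saliency([1.0, 1.0, 1.0]))  # influence recovered after the fact
print(explanation_by_design)      # readable directly from the model
```

The sketch only illustrates the workflow difference; real post-hoc methods (saliency maps, LIME, SHAP, ...) and by-design architectures are of course far richer, and the saliency computed here is faithful only because the toy model happens to be linear.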

(*)  https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&

[1] Arun Das and Paul Rad. Opportunities and challenges in explainable
artificial intelligence (XAI): A survey. arXiv preprint
arXiv:2006.11371, 2020.
[2] Cynthia Rudin. Stop explaining black box machine learning models
for high stakes decisions and use interpretable models instead. Nature
Machine Intelligence, 1(5):206–215, 2019.
