Université d’Artois

A Ph.D. position in Artificial Intelligence and Machine Learning is open at the CRIL lab. The fellowship is funded for three years, starting in October 2022.

Title: Learning Interpretable Circuits

Overview: In recent years, there has been a growing interest in the design of statistical learning algorithms for interpretable models, which are not only accurate but also understandable by human users. A related and desirable property is explainability, which refers to the computational ability of predictive models to explain their predictions in intelligible terms.
In this setting, decision trees are of paramount importance, as they can easily be read by recursively breaking a choice into sub-choices until a decision is reached. Although the problem of learning optimal decision trees is NP-hard, several optimization approaches have been proposed for identifying a decision tree that minimizes a regularized risk objective (e.g., [1,2,3]).
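For concreteness, here is a minimal, purely illustrative sketch of the objects involved: a decision tree over binary features, the recursive reading procedure mentioned above, and the kind of regularized risk objective (empirical error plus a penalty per leaf) that optimal decision-tree learners such as [1,2,3] minimize. All names and choices below are ours, not taken from the cited works.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Leaf:
    label: int            # class predicted once this leaf is reached

@dataclass
class Node:
    feature: int          # index of the binary feature tested at this node
    low: "Tree"           # subtree followed when the feature is 0
    high: "Tree"          # subtree followed when the feature is 1

Tree = Union[Leaf, Node]

def predict(tree: Tree, x: List[int]) -> int:
    """Read the tree: recursively break the choice into sub-choices."""
    if isinstance(tree, Leaf):
        return tree.label
    return predict(tree.high if x[tree.feature] else tree.low, x)

def num_leaves(tree: Tree) -> int:
    if isinstance(tree, Leaf):
        return 1
    return num_leaves(tree.low) + num_leaves(tree.high)

def regularized_risk(tree: Tree, data: List[Tuple[List[int], int]], lam: float) -> float:
    """Empirical error plus a size penalty: the kind of objective
    that optimal decision-tree learners minimize exactly."""
    errors = sum(predict(tree, x) != y for x, y in data)
    return errors / len(data) + lam * num_leaves(tree)
```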

Beyond decision trees, various classes of (Boolean or arithmetic) circuits are also endowed with interpretability and explainability properties. These models can efficiently handle a wide variety of explanation queries by exploiting the decomposability of their logical “and” gates and the determinism of their “or” gates [4]. Well-known classes of interpretable circuits include ordered decision diagrams, affine decision trees, and decomposable negation normal form representations.
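As a hedged illustration of why these two properties matter (this is our own toy sketch, not code from [4]): on a circuit whose “and” gates are decomposable and whose “or” gates are deterministic, model counting, a basic building block of many explanation queries, reduces to a single bottom-up pass in which counts multiply at “and” gates and add at “or” gates.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple, Union

@dataclass
class Lit:
    var: int              # variable index
    positive: bool        # True for the variable, False for its negation

@dataclass
class And:
    children: List["Circuit"]   # decomposability: children mention disjoint variables

@dataclass
class Or:
    children: List["Circuit"]   # determinism: children are pairwise inconsistent

Circuit = Union[Lit, And, Or]

def model_count(c: Circuit, n_vars: int) -> int:
    """Count the satisfying assignments of c over n_vars variables."""
    def mc(c: Circuit) -> Tuple[int, Set[int]]:
        # Returns (#models over vars(c), vars(c)).
        if isinstance(c, Lit):
            return 1, {c.var}
        results = [mc(child) for child in c.children]
        vs: Set[int] = set().union(*(v for _, v in results))
        if isinstance(c, And):
            # Decomposable "and": disjoint variable sets, so counts multiply.
            count = 1
            for k, _ in results:
                count *= k
            return count, vs
        # Deterministic "or": disjoint model sets, so counts add up
        # (each child is padded to the variables of the gate).
        return sum(k * 2 ** (len(vs) - len(v)) for k, v in results), vs
    count, vs = mc(c)
    return count * 2 ** (n_vars - len(vs))

# x1 or (not x1 and x2), i.e. x1 or x2: 3 models over 2 variables
c = Or([Lit(1, True), And([Lit(1, False), Lit(2, True)])])
print(model_count(c, n_vars=2))   # -> 3
```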

Although the problem of learning circuits has been studied in the literature (e.g., [5,6]), very little is known about learning interpretable circuits. The focus of this thesis proposal is to address this issue by exploring several research directions. One of them is to characterize the learnability of various subclasses of interpretable circuits. Another is to exploit constraint-based or MILP solvers for extracting optimal (or near-optimal) interpretable circuits. Finally, since decision circuits can easily be extended to semiring networks, a third direction is to lift results on binary classification to other learning tasks, such as regression or density estimation.
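To make the last direction concrete, here is a hedged sketch, reusing the hypothetical Lit/And/Or classes and the circuit c from the sketch above, of how the same circuit structure can be re-evaluated in different semirings: the Boolean semiring answers classification-style queries, while the probability semiring turns the circuit into a density estimator. The semirings and weights below are ours, purely for illustration.

```python
from typing import Callable

def evaluate(c, plus: Callable, times: Callable, leaf_weight: Callable):
    """Evaluate a circuit in a semiring: "and" gates use `times`,
    "or" gates use `plus`, and literals are mapped to weights."""
    if isinstance(c, Lit):
        return leaf_weight(c)
    op = times if isinstance(c, And) else plus
    value = evaluate(c.children[0], plus, times, leaf_weight)
    for child in c.children[1:]:
        value = op(value, evaluate(child, plus, times, leaf_weight))
    return value

# Boolean semiring: is the circuit satisfied by the assignment x1=1, x2=0?
assignment = {1: True, 2: False}
print(evaluate(c, plus=lambda a, b: a or b, times=lambda a, b: a and b,
               leaf_weight=lambda l: assignment[l.var] == l.positive))   # -> True

# Probability semiring: with independent marginals p(x1=1)=0.3, p(x2=1)=0.6,
# the same pass computes the probability mass of the circuit (density estimation).
p = {1: 0.3, 2: 0.6}
print(evaluate(c, plus=lambda a, b: a + b, times=lambda a, b: a * b,
               leaf_weight=lambda l: p[l.var] if l.positive else 1 - p[l.var]))  # ~0.72
```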

References:
[1] Nina Narodytska, Alexey Ignatiev, Filipe Pereira, João Marques-Silva: Learning Optimal Decision Trees with SAT. IJCAI 2018: 1362-1368.
[2] Emir Demirovic, Anna Lukina, Emmanuel Hebrard, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Peter J. Stuckey: MurTree: Optimal Decision Trees via Dynamic Programming and Search. J. Mach. Learn. Res. 23: 26:1-26:47 (2022).
[3] Valentina Zantedeschi, Matt J. Kusner, Vlad Niculae: Learning Binary Decision Trees by Argmin Differentiation. ICML 2021: 12298-12309.
[4] Gilles Audemard, Frédéric Koriche, Pierre Marquis: On Tractable XAI Queries based on Compiled Representations. KR 2020: 838-849.
[5] Nathan Linial, Yishay Mansour, Noam Nisan: Constant Depth Circuits, Fourier Transform, and Learnability. FOCS 1989: 574-579.
[6] Eran Malach, Shai Shalev-Shwartz: Learning Boolean Circuits with Neural Networks. CoRR abs/1910.11923 (2019).

Profile: The candidate should hold a Master's degree (or equivalent) in computer science. As the thesis proposal lies at the intersection of explainable artificial intelligence, combinatorial optimization, and machine learning, the candidate should have a strong background in at least some of these areas.
Contacts: Frederic Koriche (koriche@cril.fr), Jean-Marie Lagniez (lagniez@cril.fr) and Stefan Mengel (mengel@cril.fr).
