Causality, Argumentation, and Machine Learning
Project description
While traditional machine learning methods have led to great advances in artificial intelligence, there are a number of obstacles complicating their wider adoption. One is the inability of learned models to handle circumstances they were not trained for, i.e., a lack of robustness. Another is the lack of explainability: many modern machine learning methods are based on black-box models that are unable to explain their predictions, severely hindering their adoption in high-risk settings such as medical diagnosis. Pearl argues that these obstacles arise from the fact that current machine learning methods are associational learning systems that do not attempt to understand cause-and-effect relationships. Thus, in order to overcome these obstacles, machines must be equipped with the ability to reason about cause and effect. This view has gained significant support in recent years, as witnessed by many novel approaches that integrate causality into machine learning.
The overall objective of the project CAML2 is to use formal argumentation techniques for causal machine learning. While large amounts of observed data are increasingly easy to obtain, it is well known that causal relationships cannot, in general, be discovered on the basis of observed data alone. Their discovery depends on additional knowledge, such as background knowledge provided by domain experts or the results of controlled experiments. A machine that reasons about cause and effect must therefore interact with users. Given a query formulated by the user, it must be able to explain an answer, show which additional knowledge is necessary to arrive at an answer, and integrate knowledge that is provided by the user.

Formal argumentation provides tools that facilitate this kind of interaction. In formal argumentation, answers to queries are accompanied by a dialectical analysis showing why arguments in support of a conclusion are preferred to counterarguments. Such an analysis acts as an explanation for an answer and can furthermore be made interactive by allowing users to pinpoint errors or missing information and to formulate new arguments that can be integrated into the analysis.

More concretely, we will develop an approach for learning causal models from observed data and, in particular, for reasoning about conflicting causal models using arguments for and counterarguments against particular models. The approach allows for the injection of background knowledge, and expert information can be elicited through an interactive dialogue system. To address problems pertaining to uncertainty and noisy information, we consider not only qualitative but also quantitative (e.g., probabilistic) approaches to formal argumentation. An important notion in causal discovery is that of an intervention, i.e., the manual setting of a variable to a certain value, which makes it possible to guide the learning algorithm in the appropriate direction. We will model interventions through the counterfactual reasoning methods of formal argumentation, yielding a comprehensive framework for interactive argumentative causal discovery. We will focus on the setting of supervised learning, but also consider unsupervised settings, in particular clustering and density estimation.
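To give a flavour of the kind of dialectical analysis described above, the following is a minimal Python sketch of acceptance in a Dung-style abstract argumentation framework under grounded semantics. It is an illustration only, not the project's implementation; the argument names and the toy dispute about a causal claim are hypothetical.

```python
# Minimal sketch of Dung-style abstract argumentation (illustration only).
# A framework is a set of arguments plus an attack relation; the grounded
# extension collects exactly those arguments that can be defended,
# starting from the unattacked ones.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of the framework (arguments, attacks).

    `attacks` is a set of pairs (a, b), read as "a attacks b".
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if each of
        # its attackers is itself attacked by some already-accepted argument.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:  # least fixpoint reached
            return extension
        extension = acceptable

# Hypothetical toy dispute: c = "X causes Y" is attacked by
# b = "the correlation is confounded", which is in turn attacked by
# a = "a controlled experiment rules out the confounder".
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # accepted set is {'a', 'c'}
```

The returned extension mirrors the dialectical explanation sketched above: the causal claim c is accepted because its counterargument b is itself defeated by a.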
Project leader
People
Project duration
November 2021 - March 2025
Predecessor project
CAML - Argumentative Machine Learning
Publications
- Lars Bengel, Julian Sander, Matthias Thimm. Characterising Serialisation Equivalence for Abstract Argumentation. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI'24). October 2024. bibtex pdf
- Isabelle Kuhlmann, Matthias Thimm. A Discussion of Challenges in Benchmark Generation for Abstract Argumentation. In Oana Cocarascu, Sylvie Doutre, Jean-Guy Mailly, Antonio Rago (Eds.), Proceedings of The First International Workshop on Argumentation and Applications (Arg&App'23). September 2023. bibtex pdf
- Lydia Blümel, Matthias Thimm. Approximating Weakly Preferred Semantics in Abstract Argumentation through Vacuous Reduct Semantics. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR'23). September 2023. bibtex pdf
- Lars Bengel, Matthias Thimm. Towards Parallelising Extension Construction for Serialisable Semantics in Abstract Argumentation. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR'23). September 2023. bibtex pdf
- Jonas Klein, Isabelle Kuhlmann, Matthias Thimm. Graph Neural Networks for Algorithm Selection in Abstract Argumentation. In Proceedings of the First International Workshop on Argumentation & Machine Learning (ArgML'22). September 2022. bibtex pdf
- Tjitze Rienstra, Jesse Heyninck, Gabriele Kern-Isberner, Kenneth Skiba, Matthias Thimm. Explaining Argument Acceptance in ADFs. In Proceedings of the First International Workshop on Argumentation for eXplainable AI (ArgXAI'22). September 2022. bibtex pdf
- Lars Bengel, Lydia Blümel, Tjitze Rienstra, Matthias Thimm. Argumentation-based Causal and Counterfactual Reasoning. In Proceedings of the First International Workshop on Argumentation for eXplainable AI (ArgXAI'22). September 2022. bibtex pdf
- Lukas Kinder, Matthias Thimm, Bart Verheij. A labelling-based solver for computing complete extensions of abstract argumentation frameworks. In Proceedings of the Fourth International Workshop on Systems and Algorithms for Formal Argumentation (SAFA'22). September 2022. bibtex pdf
- Lars Bengel. Towards Learning Argumentation Frameworks from Labelings. In Proceedings of the 1st International Conference on Foundations, Applications, and Theory of Inductive Logic (FATIL'22). October 2022. pdf
- Isabelle Kuhlmann, Thorsten Wujek, Matthias Thimm. On the Impact of Data Selection when Applying Machine Learning in Abstract Argumentation. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Lars Bengel, Matthias Thimm. Serialisable Semantics for Abstract Argumentation. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Lydia Blümel, Matthias Thimm. A Ranking Semantics for Abstract Argumentation based on Serialisability. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Matthias Thimm. Revisiting initial sets in abstract argumentation. In Argument & Computation. July 2022. bibtex pdf