Causality, Argumentation, and Machine Learning
Project description
While traditional machine learning methods have led to great advances in artificial intelligence, a number of obstacles complicate their wider adoption. One is the inability of learned models to handle circumstances they were not trained for, commonly referred to as a lack of robustness. Another is the lack of explainability: many modern machine learning methods are based on black-box models that cannot explain their predictions, which severely hinders their adoption in high-risk settings such as medical diagnosis. Pearl argues that these obstacles arise because current machine learning methods are associational learning systems that make no attempt to understand cause-and-effect relationships. To overcome these obstacles, machines must therefore be equipped with the ability to reason about cause and effect. This view has gained significant support in recent years, as witnessed by the many novel approaches that integrate causality into machine learning.
The overall objective of the project CAML2 is to use formal argumentation techniques for causal machine learning. While large amounts of observed data are increasingly easy to obtain, it is well known that causal relationships cannot, in general, be discovered on the basis of observed data alone. Discovering them depends on additional knowledge, such as background knowledge provided by domain experts or the results of controlled experiments. A machine that reasons about cause and effect must therefore interact with users. Given a query formulated by the user, it must be able to explain its answer, show which additional knowledge is necessary to arrive at an answer, and integrate knowledge provided by the user. Formal argumentation provides tools that facilitate this kind of interaction: answers to queries are accompanied by a dialectical analysis showing why arguments in support of a conclusion are preferred to counterarguments. Such an analysis acts as an explanation for an answer and can furthermore be made interactive by allowing users to pinpoint errors or missing information and to formulate new arguments that can be integrated into the analysis.

More concretely, we will develop an approach for learning causal models from observed data and, in particular, for reasoning about conflicting causal models using arguments and counterarguments for and against particular models. The approach allows for the injection of background knowledge, and expert information can be elicited through an interactive dialogue system. To address problems pertaining to uncertainty and noisy information, we consider not only qualitative but also quantitative (e.g., probabilistic) approaches to formal argumentation. An important notion in causal discovery is that of an intervention, i.e., manually setting a variable to a certain value; interventions make it possible to guide the learning algorithm in the appropriate direction. We will model interventions through the counterfactual reasoning methods of formal argumentation, yielding a comprehensive framework for interactive argumentative causal discovery. We will focus on the setting of supervised learning, but also consider unsupervised settings, in particular clustering and density estimation. A minimal sketch of the core idea is given below.
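As an illustration only (the argument names and the helper function below are ours, not CAML2's actual implementation), the following Python sketch shows how conflicting causal models could be arbitrated in a Dung-style abstract argumentation framework: two Markov-equivalent candidate models attack each other, elicited background knowledge attacks one of them, and the grounded extension singles out the model that survives the dialectical analysis.

```python
# Illustrative sketch (assumed names, not project code): candidate causal
# models as arguments, conflicts and background knowledge as attacks,
# grounded semantics to arbitrate between them.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension as the least fixed point of the
    characteristic function: an argument is accepted once each of its
    attackers is counter-attacked by an already accepted argument."""
    accepted = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in accepted)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == accepted:
            return accepted
        accepted = acceptable

# Two Markov-equivalent models explain the same observed dependence
# between X and Y, so observed data alone cannot decide between them:
#   m1 = "X causes Y",  m2 = "Y causes X"
# Background knowledge bk = "X precedes Y in time" rules out m2.
arguments = {"m1", "m2", "bk"}
attacks = {("m1", "m2"), ("m2", "m1"),  # conflicting models attack each other
           ("bk", "m2")}                # background knowledge attacks m2

print(sorted(grounded_extension(arguments, attacks)))  # ['bk', 'm1']
```

In an interactive dialogue, a user objection could simply become a new argument with additional attacks, after which the extension is recomputed; this is one way the envisioned elicitation loop could be realized.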
Project leader
People
Project duration
November 2021 - March 2025
Predecessor project
CAML - Argumentative Machine Learning
Publications
- Lars Bengel, Julian Sander, Matthias Thimm. Characterising Serialisation Equivalence for Abstract Argumentation. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI'24). October 2024. bibtex pdf
- Florian Peter Busch, Moritz Willig, Jonas Seng, Kristian Kersting, and Devendra Singh Dhami. ψnet: Efficient Causal Modeling at Scale. In International Conference on Probabilistic Graphical Models (PGM'24). September 2024.
- Harsh Poonia, Moritz Willig, Zhongjie Yu, Matej Zečević, Kristian Kersting, Devendra Singh Dhami. χSPN: Characteristic Interventional Sum-Product Networks for Causal Inference in Hybrid Domains. In the 40th Conference on Uncertainty in Artificial Intelligence (UAI'24). July 2024.
- Moritz Willig, Matej Zečević, and Kristian Kersting. “Do Not Disturb My Circles!” Identifying the Type of Counterfactual at Hand (Short Paper). In Conference on Advances in Robust Argumentation Machines (RATIO'24). June 2024.
- Anita Keshmirian, Moritz Willig, Babak Hemmatian, Kristian Kersting, Ulrike Hahn, and Tobias Gerstenberg. Chain Versus Common Cause: Biased Causal Strength Judgments in Humans and Large Language Models. In Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 46 (CogSci'24). April 2024.
- Moritz Willig, Matej Zečević, Devendra Singh Dhami, and Kristian Kersting. Do Not Marginalize Mechanisms, Rather Consolidate! In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS'23). November 2023.
- Isabelle Kuhlmann, Matthias Thimm. A Discussion of Challenges in Benchmark Generation for Abstract Argumentation. In Oana Cocarascu, Sylvie Doutre, Jean-Guy Mailly, Antonio Rago (Eds.), Proceedings of the First International Workshop on Argumentation and Applications (Arg&App'23). September 2023. bibtex pdf
- Lydia Blümel, Matthias Thimm. Approximating Weakly Preferred Semantics in Abstract Argumentation through Vacuous Reduct Semantics. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR'23). September 2023. bibtex pdf
- Lars Bengel, Matthias Thimm. Towards Parallelising Extension Construction for Serialisable Semantics in Abstract Argumentation. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR'23). September 2023. bibtex pdf
- Matej Zečević, Moritz Willig, Devendra Singh Dhami, and Kristian Kersting. Causal Parrots: Large Language Models May Talk Causality But Are Not Causal. Transactions on Machine Learning Research (TMLR). September 2023.
- Moritz Willig, Matej Zečević, Jonas Seng, and Florian Peter Busch. Causal Concept Identification in Open World Environments. In AAAI Bridge Program on Continual Causality, pp. 52-58. June 2023.
- Matej Zečević, Moritz Willig, Florian Peter Busch, Jonas Seng. Continual Causal Abstractions. In AAAI Bridge Program on Continual Causality, pp. 45-51. June 2023.
- Jonas Seng, Florian Peter Busch, Matej Zečević, and Moritz Willig. Treatment Effect Estimation to Guide Model Optimization in Continual Learning. In AAAI Bridge Program on Continual Causality, pp. 38-44. June 2023.
- Florian Peter Busch, Jonas Seng, Moritz Willig, and Matej Zečević. Continually Updating Neural Causal Models. In AAAI Bridge Program on Continual Causality, pp. 30-37. June 2023.
- Jonas Klein, Isabelle Kuhlmann, Matthias Thimm. Graph Neural Networks for Algorithm Selection in Abstract Argumentation. In Proceedings of the First International Workshop on Argumentation & Machine Learning (ArgML'22). September 2022. bibtex pdf
- Tjitze Rienstra, Jesse Heyninck, Gabriele Kern-Isberner, Kenneth Skiba, Matthias Thimm. Explaining Argument Acceptance in ADFs. In Proceedings of the First International Workshop on Argumentation for eXplainable AI (ArgXAI'22). September 2022. bibtex pdf
- Lars Bengel, Lydia Blümel, Tjitze Rienstra, Matthias Thimm. Argumentation-based Causal and Counterfactual Reasoning. In Proceedings of the First International Workshop on Argumentation for eXplainable AI (ArgXAI'22). September 2022. bibtex pdf
- Lukas Kinder, Matthias Thimm, Bart Verheij. A labelling-based solver for computing complete extensions of abstract argumentation frameworks. In Proceedings of the Fourth International Workshop on Systems and Algorithms for Formal Argumentation (SAFA'22). September 2022. bibtex pdf
- Lars Bengel. Towards Learning Argumentation Frameworks from Labelings. In Proceedings of the 1st International Conference on Foundations, Applications, and Theory of Inductive Logic (FATIL'22). October 2022. pdf
- Isabelle Kuhlmann, Thorsten Wujek, Matthias Thimm. On the Impact of Data Selection when Applying Machine Learning in Abstract Argumentation. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Lars Bengel, Matthias Thimm. Serialisable Semantics for Abstract Argumentation. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Lydia Blümel, Matthias Thimm. A Ranking Semantics for Abstract Argumentation based on Serialisability. In Proceedings of the 9th International Conference on Computational Models of Argument (COMMA'22). September 2022. bibtex pdf
- Matthias Thimm. Revisiting initial sets in abstract argumentation. In Argument & Computation. July 2022. bibtex pdf