publications
2025
- "Two Means to an End Goal": Connecting Explainability and Contestability in the Regulation of Public Sector AITimothée Schmude, Mireia Yurrita, Kars Alfrink, Thomas Le Goff, and Tiphaine Viard2025
Explainability and its emerging counterpart contestability have become important normative and design principles for the trustworthy use of AI as they enable users and subjects to understand and challenge AI decisions. However, the regulation of AI systems spans technical, legal, and organizational dimensions, producing a multiplicity in meaning that complicates the implementation of explainability and contestability. Resolving this conceptual ambiguity requires specifying and comparing the meaning of both principles across regulation dimensions, disciplines, and actors. This process, here defined as translation, is essential to provide guidance on the principles’ realization. We present the findings of a semi-structured interview study with 14 interdisciplinary AI regulation experts. We report on the experts’ understanding of the intersection between explainability and contestability in public AI regulation, their advice for a decision subject and a public agency in a welfare allocation AI use case, and their perspectives on the connections and gaps within the research landscape. We provide differentiations between descriptive and normative explainability, judicial and non-judicial channels of contestation, and individual and collective contestation action. We further outline three translation processes in the alignment of top-down and bottom-up regulation, the assignment of responsibility for interpreting regulations, and the establishment of interdisciplinary collaboration. Our contributions include an empirically grounded conceptualization of the intersection between explainability and contestability and recommendations on implementing these principles in public institutions. We believe our contributions can inform policy-making and regulation of these core principles and enable more effective and equitable design, development, and deployment of trustworthy public AI systems.
- Explainability and Contestability for the Responsible Use of Public Sector AI. Timothée Schmude. In Proceedings of the Extended Abstracts of CHI ’25, 2025.
Public institutions have begun to use AI systems in areas that directly impact people’s lives, including labor, law, health, and migration. Explainability ensures that these systems are understandable to the involved stakeholders, while its emerging counterpart contestability enables them to challenge AI decisions. Both principles support the responsible use of AI systems, but their implementation needs to take into account the needs of people without a technical background: AI novices. I conduct interviews and workshops to explore how explainable AI can be made suitable for AI novices, how explanations can support their agency by allowing them to contest decisions, and how this intersection is conceptualized. My research aims to inform policy and public institutions on how to implement responsible AI by designing for explainability and contestability. The Remote Doctoral Consortium would allow me to discuss with peers how these principles can be realized and account for human factors in their design.
- Better Together? The Role of Explanations in Supporting Novices in Individual and Collective Deliberations about AI. Timothée Schmude, Laura Koesten, Torsten Möller, and Sebastian Tschiatschek. 2025.
Deploying AI systems in public institutions can have far-reaching consequences for many people, making it a matter of public interest. Providing opportunities for stakeholders to come together, understand these systems, and debate their merits and harms is thus essential. Explainable AI often focuses on individuals, but deliberation benefits from group settings, which are underexplored. To address this gap, we present findings from an interview study with 8 focus groups and 12 individuals. Our findings provide insight into how explanations support AI novices in deliberating alone and in groups. Participants used modular explanations with four information categories to solve tasks and decide on an AI system’s deployment. We found that the explanations supported groups in creating shared understanding and in finding arguments for and against the system’s deployment. In comparison, individual participants engaged with explanations in more depth and performed better in the study tasks, but missed an exchange with others. Based on our findings, we provide suggestions on how explanations should be designed to work in group settings and describe their potential use in real-world contexts. With this, our contributions inform XAI research that aims to enable AI novices to understand and deliberate AI systems in the public sector.
2024
- Information that matters: Exploring information needs of people affected by algorithmic decisions. Timothée Schmude, Laura Koesten, Torsten Möller, and Sebastian Tschiatschek. International Journal of Human-Computer Studies, 2024.
Every AI system that makes decisions about people has a group of stakeholders that are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who often are AI novices. This creates a gap between conveyed information and information that matters to those who are impacted by the system’s decisions, such as domain experts and decision subjects. To address this, we present the “XAI Novice Question Bank”, an extension of the XAI Question Bank (Liao et al., 2020) containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews where participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants’ confidence increased after receiving explanations but that their understanding faced challenges. These included difficulties in locating information and in assessing their own understanding, as well as attempts to outsource understanding. Additionally, participants’ prior perceptions of the systems’ risks and benefits influenced their information needs. Participants who perceived high risks sought explanations about the intentions behind a system’s deployment, while those who perceived low risks asked more about the system’s operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.
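As a rough illustration of how such a catalog of information needs could be represented for reuse, the sketch below organizes placeholder questions under the four categories named in the abstract. The class, category keys, and example questions are assumptions for illustration and are not quoted from the published XAI Novice Question Bank.

```python
# Minimal sketch of how a question catalog like the "XAI Novice Question Bank"
# could be organized in code; the example questions are illustrative
# placeholders, not entries from the published catalog.
from dataclasses import dataclass, field


@dataclass
class QuestionCatalog:
    use_case: str
    # Categories named in the abstract: data, system context,
    # system usage, and system specifications.
    questions: dict[str, list[str]] = field(default_factory=dict)

    def add(self, category: str, question: str) -> None:
        self.questions.setdefault(category, []).append(question)


catalog = QuestionCatalog(use_case="employment prediction")
catalog.add("data", "What data about me does the system use?")                 # placeholder
catalog.add("system context", "Who decided to deploy this system, and why?")   # placeholder
catalog.add("system usage", "Can a caseworker overrule the system's output?")  # placeholder
catalog.add("system specifications", "How often is the system wrong?")         # placeholder

for category, items in catalog.questions.items():
    print(category, "->", items)
```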
- Challenging the Human-in-the-loop in Algorithmic Decision-making. Sebastian Tschiatschek, Eugenia Stamboliev, Timothée Schmude, Mark Coeckelbergh, and Laura Koesten. 2024.
We discuss the role of humans in algorithmic decision-making (ADM) for socially relevant problems from a technical and philosophical perspective. In particular, we illustrate tensions arising from diverse expectations, values, and constraints by and on the humans involved. To this end, we assume that a strategic decision-maker (SDM) introduces ADM to optimize strategic and societal goals while the algorithms’ recommended actions are overseen by a practical decision-maker (PDM) - a specific human-in-the-loop - who makes the final decisions. While the PDM is typically assumed to be a corrective, it can counteract the realization of the SDM’s desired goals and societal values, not least because of a misalignment of these values and unmet information needs of the PDM. This has significant implications for the distribution of power between the stakeholders in ADM, their constraints, and information needs. In particular, we emphasize the overseeing PDM’s role as a potential political and ethical decision maker, who is expected to balance strategic, value-driven objectives against on-the-ground individual decisions and constraints. We demonstrate empirically, on a machine learning benchmark dataset, the significant impact an overseeing PDM’s decisions can have even if the PDM is constrained to performing only a limited number of actions differing from the algorithms’ recommendations. To ensure that the SDM’s intended values are realized, the PDM needs to be provided with appropriate information conveyed through tailored explanations, and its role must be characterized clearly. Our findings emphasize the need for an in-depth discussion of …
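The empirical setup described in the abstract (a PDM who may deviate from only a bounded number of the algorithm's recommendations) can be illustrated with a minimal simulation. This is a sketch under assumed choices: the synthetic data, logistic-regression ADM, and uncertainty-based override policy are placeholders for illustration, not the paper's benchmark or experiment.

```python
# Minimal sketch (not the paper's experiment): simulate a practical
# decision-maker (PDM) who may override at most `budget` of an ADM
# system's recommendations, and measure the effect on aggregate outcomes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a benchmark dataset (assumption for illustration).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

adm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = adm.predict_proba(X_te)[:, 1]
recommendation = (proba >= 0.5).astype(int)


def pdm_decisions(recommendation, proba, budget):
    """Hypothetical override policy: the PDM flips the `budget` most
    uncertain recommendations (those closest to the decision threshold)."""
    if budget == 0:
        return recommendation.copy()
    uncertainty = -np.abs(proba - 0.5)            # higher = closer to 0.5
    override_idx = np.argsort(uncertainty)[-budget:]
    final = recommendation.copy()
    final[override_idx] = 1 - final[override_idx]
    return final


for budget in (0, 25, 100):
    final = pdm_decisions(recommendation, proba, budget)
    acc = (final == y_te).mean()
    pos_rate = final.mean()
    print(f"budget={budget:4d}  accuracy={acc:.3f}  positive-rate={pos_rate:.3f}")
```

Even a small override budget can shift aggregate quantities such as accuracy or the positive-decision rate, which is the kind of downstream effect of PDM interventions the abstract refers to.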
- Spotlight Erklärbare KI: Eine Besprechung ausgewählter Use Cases aus rechtlicher und technologischer Perspektive. Elisabeth Paar, Timothée Schmude, and Cansu Cinar. Juridikum. Zeitschrift für Kritik, Recht, Gesellschaft, 2024.
2023
- On the Impact of Explanations on Understanding of Algorithmic Decision-Making. Timothée Schmude, Laura Koesten, Torsten Möller, and Sebastian Tschiatschek. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT), Chicago, IL, USA, 2023.
Ethical principles for algorithms are gaining importance as more and more stakeholders are affected by "high-risk" algorithmic decision-making (ADM) systems. Understanding how these systems work enables stakeholders to make informed decisions and to assess the systems’ adherence to ethical values. Explanations are a promising way to create understanding, but current explainable artificial intelligence (XAI) research does not always consider existing theories on how understanding is formed and evaluated. In this work, we aim to contribute to a better understanding of understanding by conducting a qualitative task-based study with 30 participants, including users and affected stakeholders. We use three explanation modalities (textual, dialogue, and interactive) to explain a "high-risk" ADM system to participants and analyse their responses both inductively and deductively, using the "six facets of understanding" framework by Wiggins & McTighe [63]. Our findings indicate that the "six facets" framework is a promising approach to analyse participants’ thought processes in understanding, providing categories for both rational and emotional understanding. We further introduce the "dialogue" modality as a valid explanation approach to increase participant engagement and interaction with the "explainer", allowing for more insight into their understanding in the process. Our analysis further suggests that individuality in understanding affects participants’ perceptions of algorithmic fairness, demonstrating the interdependence between understanding and ADM assessment that previous studies have outlined. We posit that drawing from theories on learning and understanding like the "six facets" and leveraging explanation modalities can guide XAI research to better suit explanations to learning processes of individuals and consequently enable their assessment of ethical values of ADM systems.
- Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making. Timothée Schmude, Laura Koesten, Torsten Möller, and Sebastian Tschiatschek. 2023.
We argue that explanations for "algorithmic decision-making" (ADM) systems can benefit from adopting practices that are already used in the learning sciences. We briefly introduce the importance of explaining ADM systems, give an overview of approaches drawing from other disciplines to improve explanations, and present the results of our qualitative task-based study incorporating the "six facets of understanding" framework. We close with questions guiding the discussion of how future studies can leverage an interdisciplinary approach.
- QUARE: 2nd Workshop on Measuring the Quality of Explanations in Recommender Systems. Oana Inel, Nicolas Mattis, Milda Norkute, Alessandro Piscopo, Timothée Schmude, and 2 more authors. In Proceedings of the 17th ACM Conference on Recommender Systems, Singapore, Singapore, 2023.
QUARE — measuring the QUality of explAnations in REcommender systems — is the second workshop which focuses on evaluation methodologies for explanations in recommender systems. We bring together researchers and practitioners from academia and industry to facilitate discussions about the main issues and best practices in the respective areas, identify possible synergies, and outline priorities regarding future research directions. Additionally, we want to stimulate reflections around methods to systematically and holistically assess explanation approaches, impact, and goals, at the interplay between organisational and human values. To that end, this workshop aims to co-create a research agenda for evaluating the quality of explanations for recommender systems.
2022
- Program or be Programmed: Lehre Künstlicher Intelligenz in den Digital Humanities. Timothée Schmude and Claes Neuefeind. Hochschullehre zu Künstlicher Intelligenz, 2022.
In this contribution, we present an approach to teaching AI-related topics in the Digital Humanities (DH). The concept pairs a theoretical seminar with a practical exercise course run in parallel, which we implemented at the Universität zu Köln in the academic year 2021/2022. The guiding idea of the contribution is that the DH, as an interdisciplinary research field at the intersection of the humanities and digital technologies, offer a particularly suitable environment for training students for the diverse demands of the AI field.
2020
- Using Probabilistic Soft Logic to Improve Information Extraction in the Legal Domain. Birgit Kirsch, Sven Giesselbach, Timothée Schmude, Malte Völkening, Frauke Rostalski, and 1 more author. In LWDA 2020, 2020.
Extracting information from court process documents to populate a knowledge base produces data valuable to legal faculties, publishers and law firms. A challenge lies in the fact that the relevant information is interdependent and structured by numerous semantic constraints of the legal domain. Ignoring these dependencies leads to inferior solutions. Hence, the objective of this paper is to demonstrate how the extraction pipeline can be improved by the use of probabilistic soft logic rules that reflect both legal and linguistic knowledge. We propose a probabilistic rule model for the overall extraction pipeline, which enables us both to map dependencies between local extraction models and to integrate additional domain knowledge in the form of logical constraints. We evaluate the performance of the model on a corpus of German court sentences.
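The core mechanism named in the abstract, weighted soft-logic rules that encode legal and linguistic constraints over the outputs of local extractors, can be sketched with the Łukasiewicz relaxation used by probabilistic soft logic. The predicates, rule, and weight below are illustrative assumptions, not the authors' actual rule model or data.

```python
# Minimal sketch of the probabilistic-soft-logic idea (not the authors' model):
# truth values live in [0, 1], logical connectives use the Lukasiewicz
# relaxation, and a weighted rule contributes a hinge penalty when violated.

def soft_and(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)

def soft_implies(body: float, head: float) -> float:
    # Degree to which "body -> head" is satisfied under the relaxation.
    return min(1.0, 1.0 - body + head)

def rule_penalty(weight: float, body: float, head: float) -> float:
    # Hinge loss: weighted distance to full satisfaction of the rule.
    return weight * (1.0 - soft_implies(body, head))

# Hypothetical local-extractor confidences for one court document.
mentions_defendant = 0.9   # span tagged as a defendant mention
mentions_verdict = 0.8     # span tagged as a verdict statement
linked_to_verdict = 0.3    # relation extractor: defendant linked to the verdict

# Illustrative domain rule: if a defendant and a verdict are both mentioned,
# the defendant should be linked to that verdict.
body = soft_and(mentions_defendant, mentions_verdict)
penalty = rule_penalty(weight=2.0, body=body, head=linked_to_verdict)
print(f"rule body truth = {body:.2f}, penalty = {penalty:.2f}")
# Inference in PSL would adjust the open variables (here the relation's truth
# value) to minimize the total weighted penalty across all grounded rules.
```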