Enhancing Human-AI Collaboration Through Explainable AI

Event date
18 June 2025
Event time
12:00 - 13:30
Oxford week
TT 8
Audience
Anyone
Venue
Faculty of Law - Seminar Room F
Speaker(s)

Lena Liebich, Leibniz Institute for Financial Research SAFE e.V.

We warmly welcome Lena Liebich from the Leibniz Institute for Financial Research SAFE to present her fascinating research on Explainable AI and Decision Support.

Abstract: The use of explainable AI (XAI) methods to render the prediction logic of black-box AI interpretable to humans is becoming more popular and more widely used in practice, not least because of regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but can also unintentionally alter individuals’ cognitive processes, e.g., distorting their reasoning and evoking informational overload. While empirical evidence on how XAI affects the way individuals “think” is growing, it has been largely overlooked whether XAI can also affect individuals’ “thinking about thinking”, i.e., metacognition, which theory conceptualizes as monitoring and controlling the thinking processes studied previously. As a first step towards filling this gap, we investigate whether XAI affects confidence calibration, and thereby decisions to transfer decision-making responsibility to AI, at the meta-level of cognition. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI’s underlying prediction logic. We find that XAI improves individuals’ metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and the effectiveness of human-to-AI delegation decisions. Interestingly, these effects occur only when explanations reveal to individuals that the AI’s logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence, but at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human–XAI collaboration.

We meet at 12pm (noon) for a 12.10pm start. A light lunch will be provided.
