Spotting fakes: How do non-experts approach deepfake video detection?

Holmes, Mary, Somoray, Klaire, Connor, Jonathan D., Goodall, Darcy W., Beaumont, Lynsey, Bugeja, Jordan, Eljed, Isabelle E., Ng, Sarah Sai Wan, Ede, Ryan, and Miller, Dan J. (2025) Spotting fakes: How do non-experts approach deepfake video detection? Studies in Communication and Media, 14 (4). pp. 550-569.

View at Publisher Website: https://doi.org/10.5771/2192-4007-2025-4...


Abstract

Intervening to bolster human detection of deepfakes has proven difficult. Little is known about the behavioural strategies people employ when attempting to detect deepfakes. This paper reports two studies in which non-experts completed a deepfake detection task. As part of the task, participants were presented with a series of short videos – half of which were deepfakes – and asked to categorise each video as either deepfake or authentic. In Study 1 (N = 391), an online study, participants were randomly assigned to a control or intervention group (in which they received a list of detection strategies before the detection task). After the detection task, participants elaborated on the approach they employed during the task. In Study 2 (N = 32), a laboratory-based study, participants’ gaze behaviour (fixations and saccades) was recorded during the detection task. No detection strategies were provided to Study 2 participants. Consistent with prior research, Study 1 participants showed modest detection accuracy (M = .61, SD = .14) – only somewhat above chance levels (.50) – with no difference between the intervention and control groups. However, content analysis of participants’ self-reports revealed that the intervention successfully shifted participants’ attention toward cues such as skin texture and facial movements, while the control group more frequently reported relying on intuition (gut feeling) and features such as body language. Study 2 found similar levels of detection accuracy (M = .65, SD = .20). Participants focused their gaze primarily on the eyes and mouth rather than the body, showing a slight preference for the eyes over the mouth. No differences in gaze were found between authentic and deepfake videos or between correctly and incorrectly categorised videos. The findings suggest interventions can modify detection behaviours (even without improving accuracy). 
Future interventions may benefit from directing attention away from the eyes and toward more diagnostic features, such as face–body inconsistencies and the face boundary.

Item ID: 90271
Item Type: Article (Research - C1)
ISSN: 2192-4007
Keywords: deepfakes, AI-generated media, synthetic media, detection, self-report, eye-tracking
Copyright Information: CC BY-NC-ND
Date Deposited: 27 Jan 2026 03:28
