Authors: Raunak Sharma, Dr. Sundeep Katevarapu, Aarzoo
Abstract
Background: The proliferation of AI-generated synthetic media has created an unprecedented challenge to information authenticity and public trust. In a meta-analysis of 56 studies with 86,155 participants, Somoray et al. (2025) found that average human deepfake detection accuracy was 55.54 percent, not above chance. A 2025 scoping review documented a 704 percent increase in deepfakes during 2023, and Chandra et al. (2025) reported a 45-50 percent drop in detection AUC for in-the-wild samples relative to controlled benchmarks.
Objectives: To evaluate detection system performance under controlled and in-the-wild conditions, investigate professional verification practices across twelve countries, and develop a multi-layered Trust Reconstruction Framework integrating technological, institutional, educational, and regulatory approaches.
Methods: A convergent parallel mixed-methods design combined an evaluation of six detection systems on 4,800 samples spanning face-swap, lip-sync, and fully generated modalities with 36 semi-structured interviews of fact-checkers, platform professionals, and media literacy educators in twelve countries.
Results: Detection accuracy averaged 73.2 percent on controlled samples but 54.8 percent on in-the-wild samples (an 18.4-point gap, p<.001, d=3.56). The gap was largest for fully generated content (26.1 points). Four qualitative themes emerged: authenticity crisis, detection inadequacy, institutional trust dependency, and education imperative. Institutional credibility, rather than technological detection, was identified as the primary mechanism for maintaining trust.
Conclusion: Because of this dual detection failure, neither technology nor human judgment can reliably distinguish authentic from synthetic media. Trust reconstruction therefore requires multi-layered interventions: content provenance technologies, institutional credibility signals, audience literacy development, and regulatory transparency requirements.
Keywords: deepfakes, synthetic media, misinformation, trust reconstruction, verification, content provenance, media credibility, AI detection, information integrity, media literacy.