As artificial intelligence (AI) rapidly integrates into media production, the credibility of AI-generated videos and the resulting user trust have become critical concerns. A recent study published in Frontiers in Psychology examines the psychological pathways through which visual anomalies, termed "AI hallucinations," influence viewer perception and behavioral intentions, offering a nuanced understanding of this evolving digital landscape. The research, grounded in the Stimulus-Organism-Response (S-O-R) framework, reveals that the perceived realism of AI-generated content plays a more dominant role in shaping trust than transient emotional discomfort, a finding with significant implications for developers and content creators.

The Rise of AI-Generated Video and the Trust Dilemma

The proliferation of generative AI has revolutionized media production, dramatically increasing efficiency and lowering barriers to entry for content creation. AI-generated videos in particular are becoming increasingly sophisticated and realistic, promising to reshape visual storytelling and digital communication. However, this technological advancement has not automatically translated into universal user acceptance. Instead, a significant "trust dilemma" has emerged: increasingly realistic AI-generated videos can paradoxically lead to diminished user trust because of subtle yet perceptible visual anomalies. These anomalies, referred to as "AI hallucinations," encompass a range of imperfections such as distorted facial structures, discontinuous movements, and illogical scene constructions. Though often subtle enough to escape untrained eyes, these deviations from naturalistic representation can trigger negative psychological responses, including feelings of eeriness and cognitive dissonance.
This study sought to systematically investigate how these hallucinations impact user perception, moving beyond simply identifying their presence to understanding the psychological mechanisms that mediate their effect on trust and subsequent behavior.

Unpacking the S-O-R Framework in the Context of AI Video

The research employed the Stimulus-Organism-Response (S-O-R) model, a well-established framework in environmental psychology, to dissect the user experience with AI-generated videos. In this context:

- Stimulus (S): the AI hallucination intensity within the generated videos, manipulated at three levels: low, medium, and high.
- Organism (O): the viewer's internal psychological processing, encompassing emotional responses (uncanny valley eeriness), cognitive evaluations (perceived realism), and judgments of credibility (perceived trust).
- Response (R): the user's behavioral intention, such as willingness to continue watching, use, or recommend the AI-generated video.

The study collected data from 408 participants who viewed AI-generated videos with varying levels of hallucination and then rated their perceptions of the videos across several dimensions.

Key Findings: Realism Reigns Supreme in Trust Formation

The analysis, using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Analysis of Variance (ANOVA), yielded several compelling results:

- AI hallucinations undermine realism and evoke eeriness: As predicted, higher levels of AI hallucination significantly increased uncanny valley eeriness and reduced perceived realism. This aligns with the understanding that subtle visual distortions disrupt the sense of naturalness, producing discomfort.
- Perceived realism is the strongest predictor of trust: The study found that perceived realism had the most substantial positive effect on perceived trust.
This suggests that for AI-generated videos, the cognitive assessment of how closely the content aligns with real-world logic and appearance is paramount for building credibility.

- Trust as the central mediator: Both uncanny valley eeriness and perceived realism influenced behavioral intention indirectly through perceived trust. Perceived trust, in turn, was a strong positive predictor of behavioral intention. This highlights trust as the critical psychological bridge connecting internal processing to outward behavior.
- Differential impact of hallucination levels: ANOVA results confirmed significant differences in participants' responses across the three hallucination levels. As hallucination intensity increased, perceived realism and trust consistently declined, while uncanny valley eeriness and negative behavioral intentions rose. Notably, the effect on perceived realism was particularly pronounced, underscoring its sensitivity to visual anomalies.

The study's path analysis revealed that AI hallucinations negatively affect behavioral intention through two primary S-O-R pathways: first, by eliciting uncanny valley eeriness, which in turn reduces perceived trust; and second, by diminishing perceived realism, which subsequently lowers perceived trust. Both pathways ultimately converge on behavioral intention. The effect of perceived realism on trust was found to be considerably stronger than that of uncanny valley eeriness, solidifying the cognitive aspect of realism as the dominant factor in trust formation.

Broader Implications for the AI Content Landscape

The findings of this research carry significant implications for the future development and deployment of AI-generated video content.

Theoretical contributions: The study validates and extends the S-O-R model's applicability to the realm of AI-generated visual media.
It provides empirical evidence for the psychological transmission mechanisms linking visual distortions to user trust and behavior, offering a novel theoretical perspective on credibility construction in generative visual content. By applying uncanny valley theory to dynamic video content, the research deepens our understanding of how subtle imperfections can produce significant psychological responses.

Practical Guidance for Developers and Creators

The research offers actionable insights for the AI industry:

- Prioritize perceived realism: Developers should focus on improving the underlying logical coherence and visual fidelity of AI models to enhance perceived realism; this matters more for building trust than addressing superficial aesthetic qualities alone. Fixing structural distortions and semantic inconsistencies should be a top priority.
- Understand trust thresholds: The study suggests that users may tolerate minor visual flaws, but beyond certain thresholds, trust and behavioral intentions can collapse rapidly. Identifying and staying within these thresholds is vital for widespread adoption.
- Holistic content evaluation: Evaluating AI-generated video quality should go beyond technical metrics. Incorporating user-centric psychological assessments, such as sensitivity to uncanny valley effects and realism tolerance, is essential for producing content that is not only visually appealing but also psychologically credible.

Addressing the "trust black box": By illuminating the psychological pathways through which AI hallucinations erode trust, this study helps demystify the "trust black box" associated with AI-generated content. It moves beyond simply acknowledging the problem to offering a data-driven explanation of why users develop trust or distrust in AI-generated videos.
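The pattern the study reports, with ratings diverging across hallucination levels and two routes into trust, can be sketched numerically. The simulation below is purely illustrative: the 1-7 rating distributions, effect sizes, and per-predictor OLS slopes are assumptions for demonstration, not the study's data or its method (the paper used PLS-SEM with trust as a mediator).

```python
import random
import statistics

random.seed(0)

def clamp(x, lo=1.0, hi=7.0):
    """Keep a simulated rating on the 1-7 Likert scale."""
    return max(lo, min(hi, x))

def simulate(n, realism_mean, eeriness_mean):
    """Hypothetical viewer ratings for one hallucination level."""
    rows = []
    for _ in range(n):
        realism = clamp(random.gauss(realism_mean, 0.8))
        eeriness = clamp(random.gauss(eeriness_mean, 0.8))
        # Trust depends more strongly on realism than on eeriness,
        # mirroring the qualitative pattern the study reports.
        trust = clamp(1.0 + 0.7 * realism - 0.2 * eeriness
                      + random.gauss(0, 0.5))
        rows.append((realism, eeriness, trust))
    return rows

# Three hallucination levels, 136 viewers each (408 total, as in the
# study; the rating distributions themselves are invented).
low = simulate(136, realism_mean=5.5, eeriness_mean=2.5)
medium = simulate(136, realism_mean=4.5, eeriness_mean=3.5)
high = simulate(136, realism_mean=3.0, eeriness_mean=5.0)

def col_mean(rows, i):
    return statistics.fmean(r[i] for r in rows)

# ANOVA-style group comparison: as hallucination intensity rises,
# mean realism (col 0) and trust (col 2) fall, eeriness (col 1) rises.
for name, group in (("low", low), ("medium", medium), ("high", high)):
    print(name, [round(col_mean(group, i), 2) for i in range(3)])

def slope(xs, ys):
    """Simple OLS slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

pooled = low + medium + high
realism = [r[0] for r in pooled]
eeriness = [r[1] for r in pooled]
trust = [r[2] for r in pooled]

b_realism = slope(realism, trust)    # cognitive route into trust
b_eeriness = slope(eeriness, trust)  # affective route into trust
print("realism->trust", round(b_realism, 2),
      "eeriness->trust", round(b_eeriness, 2))
```

With these assumed parameters, the realism slope comes out positive and larger in magnitude than the negative eeriness slope, reproducing the study's headline ordering (cognitive route dominant); changing the assumed coefficients would change that outcome.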
Limitations and Future Directions

While this study provides robust insights, certain limitations warrant consideration in future research. The sample was composed predominantly of young individuals with backgrounds in design and visual media. While this demographic is highly relevant as early adopters and future professionals in the field, their heightened sensitivity to visual anomalies may limit the generalizability of the findings to broader, less visually attuned populations. Future research would benefit from including more diverse age groups and cross-cultural samples to establish external validity. Furthermore, the study focused on a specific set of psychological variables; expanding the scope to include factors such as perceived risk, AI literacy, and cognitive load could yield a more comprehensive model of user decision-making in AI-generated content contexts. Additionally, examining different generative AI models and employing more advanced statistical techniques, such as multilevel SEM, could offer further refinement.

Conclusion

As AI-generated videos become increasingly ubiquitous, understanding the psychological underpinnings of user trust is paramount. This research, using the S-O-R framework, provides compelling evidence that AI hallucinations, by diminishing perceived realism and evoking uncanny valley eeriness, significantly affect user trust and subsequent behavioral intentions. The findings underscore the critical role of cognitive evaluation, specifically perceived realism, in shaping trust, offering a clear roadmap for developers who want to create AI-generated content that is not only technically impressive but also psychologically credible and trustworthy. By prioritizing logical coherence and visual naturalness, the AI industry can navigate the trust dilemma and pave the way for responsible, widespread adoption of AI-powered video content.