Abstract
This study investigates the impact of news generation formats and source authority on trust decisions and their underlying cognitive mechanisms by designing experiments and constructing a Drift-Diffusion Model (DDM). The results indicate that trust rates, response times, and decision-making processes are all influenced by the interaction between generation formats and source authority. Specifically, compared to non-authoritative organizations, individuals exhibit lower trust rates and longer decision response times for AI-generated news from authoritative organizations. Furthermore, the evidence accumulation rate ($v$) and the starting point bias ($z$) during the decision-making process are lower in these cases. The research suggests that the blind introduction of AI technology by authoritative media may damage their credibility, leading to a trust crisis during the process of intelligent transformation.
The Trust Paradox of Generated Content: Cognitive Decision-Making Mechanisms Based on the Drift-Diffusion Model
Authors: $^1$, $^2$
School of Psychology, Zhejiang Normal University; Zhejiang Key Laboratory of Intelligent Education Technology and Child-Adolescent Mental Health and Crisis Intervention, Jinhua.
Abstract
This research integrates behavioral and cognitive perspectives to reveal the potential trust risks faced by authoritative news media during their intelligent transformation in the era of Artificial Intelligence (AI). By examining the underlying decision-making processes, this study provides strategic recommendations for how authoritative media can appropriately leverage AI to empower the industry and ensure a smooth transition. The findings caution that as media organizations adopt AI technologies, they must prioritize transparency and comprehensibility in technical applications. Efforts should be directed toward eliminating audience skepticism regarding emerging technologies to maintain the long-term credibility and public trust of media platforms.
Introduction
In the current era of rapid artificial intelligence development, authoritative news media are undergoing a significant intelligent transformation. However, this shift brings about a "trust paradox" regarding AI-generated content. While AI enhances efficiency and content production capabilities, it simultaneously introduces complexities in how audiences perceive and trust the information provided.
This study employs the Drift-Diffusion Model (DDM) to investigate the cognitive mechanisms underlying trust decisions. By analyzing the dynamic process of information accumulation and decision-making, we aim to understand the factors that influence whether an audience accepts or rejects content generated or mediated by AI systems.
[FIGURE:1]
Cognitive Mechanisms and the Drift-Diffusion Model
The decision to trust generated content is not instantaneous but involves a continuous process of evidence accumulation. According to the Drift-Diffusion Model, individuals gather internal and external cues until a decision threshold is reached. In the context of AI-generated news, these cues include the reputation of the media outlet, the perceived accuracy of the content, and the explicit disclosure of AI involvement.
Our research highlights that when authoritative media use AI without sufficient transparency, it can lead to "drift" toward distrust, even if the content itself is factually correct. This suggests that the perceived "black box" nature of AI algorithms creates a cognitive barrier for the audience.
[TABLE:1]
Strategies for Media Transformation
To navigate the challenges of the AI era, authoritative media must move beyond mere technical adoption and focus on the socio-cognitive dimensions of trust. Based on our findings, we propose the following strategies:
- Ensuring Technical Transparency: Media platforms should clearly label AI-generated content and provide accessible explanations of the algorithms used.
- Enhancing Interpretability: Efforts must be made to make the logic behind AI-driven editorial decisions understandable to the lay audience.
- Maintaining Institutional Credibility: The transition to intelligent systems should be framed as a tool to augment, rather than replace, the professional judgment of human journalists.
Keywords: Source Authority, Trust Decision-making, Generative AI, News, Drift-Diffusion Model (DDM)
CLC Number: 23MHCICAZD04
Note: The first two authors contributed equally to this work.
Introduction
In the contemporary digital information ecosystem, the rapid advancement of generative artificial intelligence (AI) has fundamentally transformed the production and dissemination of news. As AI-generated content becomes increasingly indistinguishable from human-authored journalism, understanding how audiences evaluate the credibility of information and make trust-based decisions has become a critical area of inquiry. This study investigates the cognitive mechanisms underlying trust decisions in the context of news consumption, specifically focusing on the influence of source authority and the distinction between human-generated and AI-generated content.
The proliferation of "synthetic" news necessitates a rigorous examination of whether traditional markers of authority—such as established media brands or institutional affiliations—continue to serve as effective heuristics for trust. Furthermore, as users encounter a mix of authentic and generated information, the psychological process of deciding whether to trust a specific piece of news becomes more complex. To address these challenges, this research utilizes the Drift-Diffusion Model (DDM) to analyze the dynamics of the decision-making process, moving beyond simple accuracy measures to capture the latent cognitive variables involved in information processing.
Theoretical Framework and Methodology
Source Authority and Trust Decisions
Source authority has long been recognized as a primary factor influencing the persuasiveness and perceived credibility of information. In the context of news, authority is often derived from the perceived expertise, trustworthiness, and status of the source. However, the rise of generative AI introduces a "black box" element to content creation, potentially disrupting the traditional relationship between source authority and audience trust. We examine how the presence of AI-related cues interacts with source metadata to shape the final trust decision.
The Drift-Diffusion Model (DDM)
To provide a more granular understanding of the decision-making process, we employ the Drift-Diffusion Model (DDM). The DDM is a prominent computational framework in cognitive psychology used to model binary choice tasks. It assumes that decisions are reached by the continuous accumulation of evidence over time until a predefined threshold is met.
[FIGURE:1]
In this study, the DDM allows us to decompose the trust decision into several key parameters:
1. Drift Rate ($v$): Represents the speed and quality of information processing; a higher drift rate indicates faster, more efficient evidence accumulation toward a decision.
2. Decision Boundary ($a$): The amount of evidence that must accumulate before a response is made; the upper boundary represents trust, the lower boundary distrust.
3. Starting Point ($z$): The prior bias toward one of the two responses before evidence accumulation begins.
4. Non-Decision Time ($t_0$): The portion of reaction time attributable to stimulus encoding and motor response.
Sheng Chunhua
School of Psychology, Zhejiang Normal University, Jinhua 321004, China; Philosophy and Social Science Laboratory of Mental Health and Crisis Intervention for Children and Adolescents, Zhejiang Normal University, Jinhua 321004, China
1 Introduction
Artificial Intelligence Generated Content (AIGC) is an emerging form of production whose outputs include text, images, and videos. Research has found that while the development of this technology has greatly enhanced the user's audiovisual experience, it has also lowered the threshold for generating disinformation, presenting the public with severe cognitive challenges when discerning the authenticity of content \cite{Nightingale}. Consequently, individuals exhibit two diametrically opposite attitudes toward AIGC: "automation bias" and "algorithm aversion." The former refers to a tendency to overestimate the consistency and accuracy of technology, leading to a preference for its generated content; the latter refers to a rejection of AI due to its perceived dehumanized characteristics, such as a lack of understanding and agency. In the field of news communication, media outlets have begun utilizing AIGC technology to provide the public with higher-quality news content. Against this backdrop, exploring the different impacts of AIGC on the credibility of news media and individual trust-based decision-making mechanisms is of great significance for addressing the trust risks arising from technological innovation.
The authority of news information sources primarily relies on high public trust in media institutions and the inherent reliability of their information delivery \cite{Moran}. News media authority is not naturally occurring but is constructed through the interactions of journalists, politicians, and experts; this authority serves as a cornerstone for building news credibility \cite{Anstead}. Regarding the construction of trust in news authority, research indicates that in the digital environment, the authority of a platform has become a key trust indicator \cite{Tandoc}. Traditional news organizations have established stable trust relationships with the public by training professionals and strictly controlling the news production process, effectively suppressing the generation of fake news \cite{Robinson}. Today, however, non-authoritative media—represented by self-media—often neglect news authenticity in pursuit of traffic, leading to the proliferation of fake news and causing a significant impact on their authority and credibility.
This study posits that when individuals process news content generated by media with different levels of authority, they may activate opposing cognitive expectations. Specifically, there is an interactive effect between the news generation method and source authority on the audience's trust decisions. For authoritative media, which have already fostered high trust expectations, the introduction of AIGC technology may be perceived as a disruption of information certainty and established cognitive habits. This leads to lower trust in AIGC-labeled content and longer decision reaction times. Conversely, for non-authoritative sources like self-media—characterized by weak platform supervision and lack of guaranteed news quality—audiences may view the introduction of AIGC technology as a means of ensuring authenticity and objectivity, aligning more closely with the core demands of news (Zheng Zhihang). This leads to an "automation bias," resulting in higher trust in AI-generated news and shorter decision reaction times. Previous discussions on decision-making cognitive mechanisms have typically been based on Dual-Process Theory. This theory suggests that the human brain possesses two information processing systems: first, an automatic and unconscious heuristic system that relies on an individual's past knowledge and beliefs to make rapid judgments \cite{Tversky, Kahneman}; and second, a slow, conscious systematic system that requires more cognitive resources and follows logical rules and deliberate reflection \cite{Evans}.
In the intelligent era, researchers have begun to focus on the impact of AI on individual decision-making cognitive mechanisms in human-machine collaborative contexts. For instance, studies have confirmed that to save cognitive effort, people tend to adopt heuristic processing when judging information accuracy \cite{Hause}. Furthermore, combining decision-making process data with deep neural networks has revealed the role of systematic processing in modeling decision processes \cite{Nikitin}. To further quantify the differences in the cognitive mechanisms of news trust decisions, this study employs the Drift-Diffusion Model (DDM) to model and analyze decision data. This model assumes that during the decision-making process, the amount of accumulated evidence changes dynamically over time until it reaches a preset decision threshold, at which point a corresponding decision is made. The DDM consists of four core parameters: drift rate ($v$), starting point bias ($z$), decision boundary ($a$), and non-decision time ($t_0$).
In the context of this study's decision task, the information accumulation process begins when an individual reads the news. The drift rate ($v$) represents the rate of evidence accumulation for a particular choice, quantifying the subject's integration of evidence for trust or distrust. The starting point ($z$) represents the prior bias before the decision, quantifying the subject's inclination to choose trust or distrust before obtaining evidence. The boundary ($a$) represents the amount of information accumulated before a response is made; the upper boundary represents trust, while the lower boundary represents distrust. Non-decision time ($t_0$) reflects other factors affecting decision reaction time, such as information encoding and motor response. Based on Dual-Process Theory and the DDM, this study argues that the audience's trust decision process integrates the dual characteristics of heuristic and systematic processing, and that the news generation method and source authority interact to influence the audience's cognitive processing mode.
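The way these four parameters jointly generate choices and reaction times can be illustrated with a discretized random walk (a minimal simulation sketch, not the hierarchical Bayesian estimation used in the analysis below; all parameter values are hypothetical):

```python
import random

def simulate_ddm_trial(v, z, a, t0, dt=0.001, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial.

    v  : drift rate (evidence accumulation speed; positive favors 'trust')
    z  : starting point as a fraction of the boundary (0 < z < 1)
    a  : boundary separation (upper boundary = trust, lower = distrust)
    t0 : non-decision time (encoding + motor response), in seconds
    Returns (choice, reaction_time).
    """
    x = z * a                      # start between 0 and a
    t = 0.0
    sd = noise * dt ** 0.5         # diffusion noise per time step
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, sd)
        t += dt
    choice = "trust" if x >= a else "distrust"
    return choice, t0 + t

# A higher drift rate should yield more 'trust' choices and faster decisions.
random.seed(1)
fast = [simulate_ddm_trial(v=1.5, z=0.5, a=2.0, t0=0.3) for _ in range(200)]
slow = [simulate_ddm_trial(v=0.3, z=0.5, a=2.0, t0=0.3) for _ in range(200)]
trust_fast = sum(c == "trust" for c, _ in fast) / len(fast)
trust_slow = sum(c == "trust" for c, _ in slow) / len(slow)
```

Under this toy parameterization, raising only the drift rate simultaneously increases the proportion of "trust" responses and shortens decision times, which is why the model can disentangle bias ($z$) from processing efficiency ($v$) in the observed data.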
When facing AIGC news published by authoritative institutions, the audience's reliance on professional journalism and rigorous institutional oversight may trigger "algorithm aversion" due to the "non-human" traits of AI. This leads to systematic processing during decision-making, where individuals evaluate content authenticity with a more cautious attitude, resulting in a relatively slower information accumulation rate and a decrease in starting point bias and drift rate. For news published by non-authoritative institutions, the audience's inherent lack of trust in social media platforms may lead them to believe that the objectivity of AIGC technology is superior to that of unreliable human creators. This prompts an "automation bias," leading individuals to use heuristic processing to make rough, rapid, and trust-biased judgments, resulting in an increase in decision bias and drift rate. Since the experiment strictly controlled for factors such as button response and the length of reading materials, non-decision time and decision boundaries are treated as constant parameters and are not included in the primary analysis \cite{Mormann, Russo}.
2 Methodology
2.1 Participants and Design
This study employed a mixed experimental design to investigate the impact of news generation methods and source authority. The news generation method served as the between-subjects variable, while source authority functioned as the within-subjects variable.
Participants and Data Cleaning
Data collection was conducted through an online platform. To ensure data quality, we implemented rigorous attention checks and excluded invalid responses. After these filtering processes, the final sample consisted of 232 valid participants (38.8% male and 61.2% female), with a mean age of $M = 21.81$ years.
Statistical Power Analysis
Following the experimental tasks, all participants completed the required measurements. A sensitivity analysis conducted via G*Power indicated that, with an alpha level of $\alpha = 0.05$ (two-tailed) and a statistical power of $1 - \beta = 0.80$, the current sample size is sufficient to detect a small-to-medium effect size of $f = 0.18$.
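As a rough cross-check of that sensitivity figure (a normal-approximation sketch for a single-df effect, not a replacement for G*Power's exact noncentral-F computation; the relation $\delta = f\sqrt{N}$ is the standard noncentrality conversion):

```python
import math
from statistics import NormalDist

def approx_power_single_df(f, n, alpha=0.05):
    """Normal approximation to the power of a 1-df F test.

    f     : Cohen's effect size f
    n     : total sample size
    alpha : two-tailed significance level
    Converts f to a z-scale noncentrality delta = f * sqrt(n) and
    evaluates P(Z > z_crit - delta), ignoring the tiny lower-tail term.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    delta = f * math.sqrt(n)
    return nd.cdf(delta - z_crit)

power = approx_power_single_df(f=0.18, n=232)  # close to the reported 0.80
```

With $f = 0.18$ and $N = 232$ this approximation lands near the stated power of 0.80, consistent with the reported sensitivity analysis.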
2.2 Materials and Procedure
The news materials for this study were collected from various social media platforms and categorized into four distinct types: authoritative media news (e.g., official outlets such as Xinhua Net), self-media news (e.g., Douyin self-media, unverified WeChat public accounts, etc.), AI-generated content (AIGC), and human-written news. The content of these news items spans diverse topics, including science, education, and health. To ensure rigorous experimental control, each news excerpt was limited to a specific length. The experimental procedure was developed using the PsychoPy platform.
Participants were required to complete a news trust decision-making task. Both groups of participants read news sourced from either authoritative or self-media platforms. The critical difference between the groups was that one group viewed content exclusively written by humans, while the other group viewed exclusively AI-generated news content.
In each trial, information regarding the source authority and the generation method of the news was clearly indicated via labels positioned directly below the content. After reading each news item, participants were instructed to make a rapid key-press decision regarding the perceived authenticity of the content.
2.3 Data Analysis
A repeated-measures analysis of variance (ANOVA) was conducted on trust rates and reaction times across the experimental conditions. In addition, this study fitted a hierarchical drift-diffusion model using hierarchical Bayesian parameter estimation in Python to analyze the decision data.
To evaluate the effects of the experimental manipulations, we calculated the differences between the posterior distributions of parameters across various experimental conditions. We then examined the Highest Density Interval (HDI) of these difference distributions. Following the criteria established by Johnson \cite{Johnson_Reference}, a significant difference between two conditions for a given parameter is identified if the 95% HDI of the difference distribution does not overlap with zero.
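This HDI criterion can be made concrete with a short sketch. Using synthetic posterior draws in place of the real MCMC samples (all values below are hypothetical), the narrowest interval containing 95% of the mass is found by scanning sorted samples:

```python
import random

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the posterior samples."""
    s = sorted(samples)
    n = len(s)
    k = max(1, int(mass * n))                    # points inside the interval
    widths = [(s[i + k - 1] - s[i], i) for i in range(n - k + 1)]
    _, i = min(widths)                           # narrowest window wins
    return s[i], s[i + k - 1]

# Hypothetical posterior draws for the drift rate in two conditions.
random.seed(7)
v_cond_a = [random.gauss(0.30, 0.10) for _ in range(5000)]
v_cond_b = [random.gauss(0.80, 0.10) for _ in range(5000)]

# Difference distribution and its 95% HDI; by the criterion above,
# the conditions differ if the HDI excludes zero.
diff = [x - y for x, y in zip(v_cond_a, v_cond_b)]
lower, upper = hdi(diff, 0.95)
significant = not (lower <= 0.0 <= upper)
```

With these synthetic draws the difference distribution sits well below zero, so its 95% HDI excludes zero and the difference would be declared significant.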
3.1 Trust Rate Results
In the repeated-measures analysis of variance (ANOVA) with trust rate as the dependent variable, the main effect of news creation method was not significant, $F(1, 215) = 1.34$, $p = .248$, $\eta_p^2 = 0.006$. However, the main effect of source authority was significant, $F(1, 215) = 114.71$, $p < .001$, $\eta_p^2 = 0.348$, with participants demonstrating a significantly higher trust rate in the authoritative condition ($M = 0.82$, $SD = 0.22$) than in the non-authoritative condition ($M = 0.53$, $SD = 0.31$).
Furthermore, the interaction between news creation method and source authority was significant, $F(1, 215) = 4.35$, $p = .039$, $\eta_p^2 = 0.020$. Simple effects analysis revealed that in the non-authoritative condition, there was no significant difference, $t(215) = 0.726$, $p = .470$, between the trust rate for AI-generated news ($M = 0.54$, $SD = 0.35$) and human-written news ($M = 0.52$, $SD = 0.36$). Conversely, in the authoritative condition, the trust rate for the AI-generated group ($M = 0.77$, $SD = 0.28$) was significantly lower than that of the human-written group ($M = 0.85$, $SD = 0.21$), $t(215) = -2.32$, $p = .022$.
3.2 Reaction Time Results
In the repeated-measures analysis of variance (ANOVA) with reaction time as the dependent variable, the main effect of the news creation method was not significant, $F(1, 114) = 1.06$, $p = .305$. However, the main effect of source authority was significant, $F(1, 114) = 11.62$, $p < .001$, $\eta_p^2 = 0.09$: reaction times were shorter under the authoritative condition ($M = 7.20$, $SD = 8.06$) than under the non-authoritative condition. Furthermore, the interaction between the news creation method and source authority was significant, $F(1, 114) = 12.14$, $p < .001$, $\eta_p^2 = 0.10$. Simple effects analysis revealed that under the authoritative condition, the reaction time for the AI-generated group ($M = 7.98$, $SD = 2.315$) was significantly longer than that of the human-written group ($M = 6.52$, $SD = 2.315$), $p = .022$. Conversely, under the non-authoritative condition, there was no significant difference between the AI-generated group ($M = 8.17$, $SD = 0.339$) and the human-written group ($M = 7.96$, $SD = 0.735$), $p = .735$.
The Bayes factor was $BF_{10} = 0.21$, as illustrated in [FIGURE:3].
The influence of news creation methods and source authority on decision-making reaction times.
3.3 Drift-Diffusion Model Results
To further investigate the influence of news generation methods and source authority on decision-making cognitive mechanisms, we conducted a hierarchical Bayesian regression analysis on the drift rate ($v$) and the starting point bias ($z$).
The drift-rate analysis indicates that authoritative sources significantly accelerate the audience's evidence accumulation ($\beta = 0.488$, $p < .05$). The news generation method did not exhibit a significant main effect ($\beta = 0.025$, $p > .05$), but the interaction between the two was significant ($\beta = -0.195$, $p < .05$). This suggests that the impact of the news creation method on the audience is moderated by source authority: under non-authoritative sources, the generation method had little effect on the drift rate, whereas under authoritative sources, AI-generated content significantly slowed evidence accumulation, lowering the drift rate relative to the human-written baseline. Here, the label variable encodes the news creation method, and the authority variable encodes source authority.
Regarding prior preferences before the decision (the starting point bias), the regression results showed a significant positive main effect of source authority ($\beta = 0.047$, 95% CI $[0.023, 0.073]$), whereas the main effect of the news generation method was not significant ($\beta = 0.021$, 95% CI $[-0.003, 0.044]$). However, a significant interaction was found ($\beta = -0.041$, 95% CI $[-0.072, -0.010]$), further demonstrating that the influence of the news creation method is moderated by source authority. Under non-authoritative conditions, AI-generated news shifted the starting point slightly toward the trust boundary; under authoritative conditions, the effect reversed: AI-generated content significantly weakened the prior trust bias that an authoritative source typically induces. The encoding of the label and authority variables is consistent with the definitions above.
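The moderation pattern for the drift rate can be written out with dummy-coded predictors. The slope coefficients below are those reported above; the intercept is a placeholder assumption, since the text does not report it (and it cancels out of both contrasts):

```python
# Regression structure: v = b0 + b1*label + b2*authority + b3*(label x authority)
# label:     0 = human-written, 1 = AI-generated
# authority: 0 = non-authoritative, 1 = authoritative
b0 = 0.10                          # intercept: hypothetical, not reported
b1, b2, b3 = 0.025, 0.488, -0.195  # coefficients reported in the text

def drift_rate(label: int, authority: int) -> float:
    """Predicted drift rate v for one cell of the 2x2 design."""
    return b0 + b1 * label + b2 * authority + b3 * label * authority

# Effect of AI generation within each authority condition:
ai_effect_nonauth = drift_rate(1, 0) - drift_rate(0, 0)  # = b1        = 0.025
ai_effect_auth = drift_rate(1, 1) - drift_rate(0, 1)     # = b1 + b3   = -0.170
```

Working through the contrasts shows why the interaction coefficient carries the key result: AI generation barely changes the predicted drift rate for non-authoritative sources (+0.025), but lowers it for authoritative sources (−0.170), regardless of the assumed intercept.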
4 Discussion
Using the Drift-Diffusion Model (DDM), this study examined how news generation methods and source authority shape audience trust judgments and the cognitive mechanisms underlying them. The interaction between news generation methods and source authority significantly influenced trust rates and reaction times in individual decision-making, partially supporting the research hypotheses. When faced with news content published by non-authoritative organizations, the introduction of AI had a positive but non-significant impact on trust rates and reaction times. This may be attributed to the proliferation of non-authoritative organizations, represented by self-media, which often pursue traffic at the expense of journalistic standards, leading to frequent instances of news malpractice and over-entertainment. Consequently, the public maintains a vigilant and skeptical attitude toward the authenticity and accuracy of content produced by these entities (Newman, 2018). Even with the introduction of AI technology, which is perceived as more objective and precise than humans, the public's doubts regarding the professionalism of these platforms remain. This reveals that in the era of artificial intelligence, the focus of credibility-building for non-authoritative organizations should remain on the accumulation of professional expertise and rigorous supervision of the content production process, rather than solely on technological innovation.
Conversely, when faced with news content from authoritative organizations, AI-generated news led to significantly longer reaction times than human-written content. The involvement of AI appears to weaken the trust advantage of authoritative sources. This may occur because individuals, perceiving the "black box" characteristics of AI, feel compelled to invest more cognitive effort and adopt a more cautious stance, gathering more information before making a decision, which in turn extends decision-making time.
This confirms that even as technophilia becomes increasingly prevalent, maintaining news credibility requires platforms to uphold a high degree of responsibility and transparency in their technological applications (Zhang Kunpeng, 2021). Furthermore, there is a significant interaction between news generation methods and source authority regarding the rate of evidence accumulation, strongly supporting the research hypotheses. For non-authoritative organizations, AI exerts a positive effect on information processing speed and initial bias in trust decisions, triggering heuristic processing where individuals perform rapid, superficial evaluations. This stems from an "automation bias" (Jones et al., 2019) rooted in trust toward the powerful information-processing capabilities of AI systems and their underlying rigorous computational programming. However, when dealing with authoritative news agencies, AI triggers a shift in the audience's perception of the news.
This shift leads to systematic processing characterized by deep reflection, where individuals collect more information and evidence to judge the authenticity of news content to mitigate potential risks brought about by algorithmic uncertainty. These results indicate that in the field of news communication, the intervention of AI technology is not always positive. In the context of authoritative sources, AI-generated news content activates a latent cognitive transition, prompting the audience to switch from default heuristic processing to cautious systematic processing. This transition reduces decision-making efficiency and initial trust, thereby posing a potential threat to the credibility of authoritative media (Evans, 2008). From both the behavioral and cognitive levels, these findings collectively reveal the "efficiency-credibility paradox" that authoritative media may face when undergoing digital transformation in the AI era: introducing AI to improve efficiency may instead damage their established authority. These findings serve as a warning to authoritative media that they must apply AI in a more transparent and responsible manner to maintain their hard-earned public trust.
This study has several limitations. First, the stimulus materials were limited to pure text, whereas real-world news often involves multimodal content such as images, graphics, and videos. Future research should expand the scope to multimodal contexts to examine the impact on audience trust under more complex presentation formats. Second, while this study focused on revealing the potential risks of AI intervention, future research should shift toward exploring constructive solutions. For instance, researchers could investigate how to enhance transparency in the application of AI technology, develop AI auxiliary tools designed to strengthen rather than weaken content credibility, and supplement these with effective communication strategies.
5 Conclusion
Trust rates and reaction times are both influenced by the interaction between source authority and the method of content generation. Compared to conditions involving non-authoritative sources, individuals in authoritative contexts exhibit lower levels of trust and increased reaction times toward AI-generated content. Conversely, this effect is not observed in the context of non-authoritative sources.
An analysis of the decision-making process reveals that in authoritative source scenarios, AI-generated content significantly reduces both the evidence accumulation rate (drift rate, $v$) and the starting point bias ($z$). In contrast, in non-authoritative source scenarios, AI-generated content has a relatively minor impact on the decision-making process.
References
Consciousness Awakening and Rule Imagination: Tactical Foundations and Practical Paths of User Resistance to Algorithms (2022). Journal of Communication Research (08): 56+126.
Escaping the Cycle of Pan-Entertainment (2019). (02): 18-20.
Algorithm Aversion in the Era of Artificial Intelligence: Research Framework and Future Prospects (2023). (10).
Analysis of the Production and Dissemination of Citizen Journalism in the New Media Era (2025).
Practical Paths and Value Pursuits of Data Journalism from the Perspective of Journalistic Authority (2023).
An Important Measure for Social Collaborative Governance in the Context of Rising Generative Content Production: The Importance and Necessity of Labeling.
Research on News Credibility in the Context of Algorithmic Recommendation (2020).
Applications of Computational Models in Moral Cognition Research. Advances in Psychological Science.
Ethical Crises and Legal Regulation of Artificial Intelligence Algorithms (2021). Journal of Northwest University of Political Science and Law (01): 14.
Anstead, N., & Chadwick, A. (2018). The primary definer online: The construction and propagation of think tank authority on social media. Media, Culture & Society.
Evans, J. St. B. T. (2011). Dual-process theories of reasoning: Contemporary issues and developmental applications. Developmental Review.
Johnson, D. J., Hopwood, C. J., Cesario, J., & Pleskac, T. J. (2017). Advancing research on cognitive processes in social and personality psychology: A hierarchical drift diffusion model primer. Social Psychological and Personality Science.
Jones-Jang, S. M., & Park, Y. J. (2023). How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. Journal of Computer-Mediated Communication, 28(1).
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society.
Lin, H., Pennycook, G., & Rand, D. G. (2022). Thinking more or thinking differently? Using drift-diffusion modeling to illuminate why accuracy prompts decrease misinformation sharing. Cognition, 105312.
Moran, R. E., & Nechushtai, E. (2022). Before reception: Trust in the news as infrastructure. Journalism.
Mormann, M., & Russo, J. E. (2021). Does attention increase the value of choice alternatives? Trends in Cognitive Sciences, 25(4), 305-315.
Newman, N., Fletcher, R., Robertson, C. T., Eddy, K., & Nielsen, R. K. Reuters Institute Digital News Report. Oxford: Reuters Institute for the Study of Journalism.
Nightingale, S. J., & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences of the United States of America.
Nikitin, A., & Kaski, S. (2021). Decision rule elicitation for domain adaptation. Proceedings of the International Conference on Intelligent User Interfaces.
Tandoc, E. C., & Hellmueller, L. (2018). Roles of gatekeepers and digital credibility. Journalism.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.