The Differential Perception of Leaders' Response to Employee-GenAI Advice: A Multi-Level Study Based on Social Comparison Theory
HAN Yi¹, MA Zhaoyi², ZONG Shuwei³
(1 School of Business Administration, Zhongnan University of Economics and Law, Wuhan 430073, China)
(2 School of Business Administration, Zhongnan University of Economics and Law, Wuhan 430073, China)
(3 School of Business Administration, Southwestern University of Finance and Economics, Chengdu 611130, China)
Abstract
The integration of Generative AI (GenAI) into organizational decision-making has become an irreversible trend, yet academic research on GenAI advice adoption remains incomplete. In leadership decision-making processes, how do leaders compare and adopt employee advice, GenAI advice, and employee-GenAI team advice? What perceptual differences exist among leaders? To address these questions, this project adopts a social comparison theory perspective and conducts five sub-studies to resolve the following issues: (1) How should GenAI advice adoption be conceptualized? (2) What differences exist in leaders' adoption of employee versus GenAI advice? (3) How do leaders differentially adopt advice from employee-GenAI teams? (4) How can we compare the differential effects of employee-GenAI advice adoption? (5) What distinct barriers and intervention mechanisms characterize employee-GenAI advice adoption? Ultimately, this study systematically constructs a multi-level model of leader adoption of employee-GenAI advice, offering new theoretical perspectives for interdisciplinary development and providing practical guidance for organizations to optimize human-AI collaborative decision-making and mitigate technological risks.
Keywords: leadership, GenAI advice, advice response, social comparison theory, perceptual differences
Classification Codes: B849; C93
1. Problem Statement
With the rapid development of Generative AI (GenAI) exemplified by ChatGPT, its impact on organizations has become increasingly significant (Liu & Tan, 2025; Rizzo et al., 2024). In workplace settings, GenAI is being introduced into decision-making contexts that previously relied entirely on human judgment (Kahr et al., 2024), profoundly influencing leadership decisions and contributing to management decision optimization through advice based on large datasets and machine learning (Mahmud et al., 2023; Xu et al., 2025).
GenAI has opened new possibilities across numerous organizational domains, including clinical care (Duong & Solomon, 2023; Jeblick et al., 2024), education (Kasneci et al., 2023), arts and music (Civit et al., 2022; Oksanen et al., 2023), and design (Jiang et al., 2023). Unlike traditional AI that relies on historical data for prediction, GenAI generates novel content based on massive pre-trained datasets (Feuerriegel et al., 2024) and optimizes its algorithms through human feedback. This capability enables it to synthesize context-relevant novel content across multiple modalities—including text, images, audio, programming code, simulations, and video (Tilton et al., 2023)—which can positively impact leadership decision-making (Huang et al., 2024).
Introducing GenAI technology into the advice domain to promote leader adoption of GenAI or "employee-GenAI" hybrid advice requires clarifying the underlying facilitating and inhibiting mechanisms. Specifically, how leaders evaluate or compare employee advice, GenAI advice, and employee-GenAI team advice to make adoption decisions represents a question of significant research value. This can be decomposed into five key issues:
First, how should GenAI advice adoption be defined? Current academic consensus remains elusive. Some scholars simply define it as individuals using GenAI products (Agraw et al., 2024), while others view it as a decision-making process involving evaluation and selective acceptance (Zong et al., 2025). Although these divergent definitions each hold value, conceptual inconsistency not only hinders deep exploration of the core meaning of GenAI advice adoption but also impedes subsequent research progress. Addressing this gap, this study focuses on the core context of leader decision-making in employee-GenAI scenarios and, integrating social comparison theory, defines GenAI advice adoption as a dynamic process in which leaders, after receiving advice from GenAI (or employee-GenAI collaborative teams), engage in multi-dimensional comparisons with employee advice, form cognitive perceptions and emotional responses, and ultimately integrate the advice content into organizational decision-making plans or directly implement its core measures. This definition emphasizes the "evaluate first, then select" dynamic process, aligning with leaders' realistic decision-making logic of weighing, reacting, and implementing in human-AI collaboration.
Second, what differences exist in leaders' adoption of employee versus GenAI advice? Academic opinion presents a polarized view. Some scholars argue that GenAI advice, suffering from insufficient explainability, accuracy controversies, and leaders' trust deficits, may induce numerous barriers, causing leaders to resist GenAI advice and trust employee advice instead (Mahmud et al., 2023). Others contend that even when neither human nor AI advice is perfect, their combination can enhance decision effectiveness (Sachin & Schecter, 2024; Choudhary et al., 2025). Additionally, adopting GenAI advice in workplaces helps reduce mechanical, repetitive work, thereby enhancing leaders' decision-making efficiency (You et al., 2022). Notably, the influence mechanisms of GenAI advice adoption in organizations may differ from traditional leader advice-taking, depending on leaders' differential perceptions of employee versus GenAI advice, including comparisons of advice sources/targets, dimensional comparisons, and motivational comparisons (Matthews & Kelemen, 2025). Therefore, exploring leaders' differential perceptions of employee and GenAI advice forms the logical foundation for understanding their trade-offs.
Third, what mechanisms underlie leaders' adoption of employee-GenAI team advice? As team composition evolves toward intelligence, leaders face new challenges when absorbing team advice. Traditional employee teams achieve consensus through social interaction and knowledge sharing (Bonaccio & Dalal, 2006), whereas GenAI teams generate synthesized recommendations through algorithmic integration and multi-modal optimization (Ikeda, 2024). These fundamentally different generation mechanisms can be understood across three dimensions. First, the paradox of diversity versus consistency in team advice: employee team advice is naturally heterogeneous, and cognitive differences among members may spark idea collisions or decision conflicts; by contrast, the surface consistency of GenAI team advice—a kind of "pseudo-diversity"—may lead leaders to underestimate its innovative potential or to overestimate its executability (Böhm et al., 2023). Second, the cognitive tension between social identity and algorithmic authority: when evaluating employee team advice, leaders are influenced by social-emotional cues, member status, and prestige (Bonaccio & Dalal, 2006), whereas GenAI teams lack social entity attributes, so evaluation relies more on technical performance metrics and system transparency (Böhm et al., 2023). Even when GenAI team advice is superior, leaders may reject it due to "algorithm aversion" (especially in ambiguous decisions) (Dietvorst et al., 2018), though understanding algorithmic principles may transform this into "AI appreciation" (Qin et al., 2025), reflecting a deep conflict between technical and social rationality. Third, the asymmetry of traceability and accountability: employee team advice can be attributed through review records and responsibility-allocation mechanisms (Mesmer-Magnus & DeChurch, 2009), whereas GenAI teams' "black box" characteristics obscure decision pathways. Even with explainable AI techniques, their decision transparency cannot match that of human processes (Lundberg & Lee, 2017). This may create an "algorithmic agency paradox": when advice errs, it is difficult to trace the responsible node for accountability; when advice is correct, the absence of clear contributors makes it hard to establish lasting trust. Based on these considerations, this study investigates leaders' differential perceptions of different team advice types across the dimensions of advice quality, risk attributes, and social value.
Fourth, how do adoption effects differ between employee and GenAI advice? Existing research indicates GenAI differs significantly from humans in providing advice, primarily in algorithmic precision, scalability, and personalization capabilities (Baines et al., 2024; Kuosmanen, 2024). From the advisor's perspective, GenAI advice's transparency and explainability are key factors influencing leader trust. While enhanced transparency helps build trust (Baines et al., 2024), its effect is constrained by advice accuracy (Schmitt et al., 2021). From the decision-maker's perspective, leaders are more inclined to adopt GenAI advice when it aligns with personal expectations (Mesbah et al., 2021). From a task-fit perspective, GenAI is often less favored than human advice in tasks requiring nuanced judgment (e.g., demand forecasting), though its potential in emotional tasks is emerging as AI emotional understanding capabilities advance (Daschner & Obermaier, 2022). Therefore, the effectiveness of GenAI and employee advice depends on multiple dimensions including trust, transparency, and task fit, yet existing research has not systematically explored differential effects in leader adoption of employee-GenAI advice (Sturm et al., 2023). Thus, this study asks: How do leaders judge effectiveness differences in employee-GenAI advice adoption? And how can the effectiveness of employee-GenAI advice adoption be enhanced?
Fifth, what risk barriers characterize employee-GenAI advice adoption, and how can we intervene? Research shows trust is a core factor in leader adoption of GenAI advice, particularly critical in low-risk contexts; in high-risk decisions, leaders more cautiously evaluate advice accuracy and credibility (Baines et al., 2024). Meanwhile, perceived accuracy and trust form complementary relationships shaping overall attitudes toward GenAI (Williams, 2020). Even when leaders hold positive attitudes toward GenAI, high-risk situations still require integrated trust and accuracy judgments (Chua et al., 2023). In summary, leader adoption of GenAI advice is influenced by trust, perceived accuracy, and contextual risk, yet specific adoption mechanisms across contexts remain unclear. Therefore, this study asks: What barriers do leaders face in adopting employee-GenAI advice? What are the corresponding intervention mechanisms? And how can leaders overcome these barriers to achieve effective adoption?
2. Research Status
As AI technology develops, decision-makers have access to increasingly diverse advice sources (Wei & Zhang, 2014; Li et al., 2022). To fully understand decision-makers' perceptual differences between human and GenAI advice and thereby optimize decision processes and enhance advice adoption effectiveness, it is necessary to deeply investigate leaders' psychological mechanisms during decision-making.
First, existing research inadequately explores boundary conditions for decision-makers' comparative reactions to human versus AI advice. Some studies focus on cognitive psychology, such as the moderating effects of subjective factors like trust, algorithm aversion, and algorithm appreciation. Specifically, leaders' trust in GenAI is influenced by perceived accuracy (Williams, 2020); algorithm aversion research shows that even when GenAI advice is superior, leaders may reject it due to inability to trace error responsibility (Dietvorst et al., 2018), while after understanding algorithmic principles, some leaders may exhibit "AI appreciation" (Qin et al., 2025). Other research examines situational variables, such as decision-makers' greater propensity to adopt GenAI advice in data-driven tasks (You et al., 2022) and preference for more ethically aligned employee advice in high-risk contexts (Jin et al., 2025). However, these studies each focus on a single dimension and lack integrated analysis of boundary conditions; interactive variables such as task type and advice strategy need to be examined jointly. In practice, decision-makers often employ diverse strategies across task types, yet existing research has not fully revealed this dynamic influence mechanism. Future research must more systematically examine boundary conditions in human-AI advice comparison to construct more comprehensive theoretical frameworks.
Second, current GenAI advice research is limited in scope. Existing studies focus on practical applications in organizations, exploring how different functional scenarios affect advice adoption, with results concentrated in data-intensive fields while paying insufficient attention to more general management decision-making scenarios. In practice, leaders do not always face data-driven tasks but rather complex decisions requiring weighing multi-dimensional factors like employee emotions and team collaboration. Existing research struggles to explain leaders' behavioral logic when weighing human versus AI advice in such contexts. Although some studies mention human-AI integrated decision-making (Choudhary et al., 2025), proposing that multi-source collaborative advice outperforms single sources, these explorations remain at the effect verification level without deep analysis of decision-makers' emotional, cognitive, and behavioral reactions. Therefore, it is necessary to expand GenAI advice adoption research to the team level, exploring responsibility attribution across three advice sources—employee, GenAI, and employee-GenAI teams—and how these influence leader advice adoption.
Third, the application of social comparison theory remains one-dimensional, with fragmented theoretical frameworks. Social comparison theory has been widely applied in traditional advice adoption research, focusing primarily on how ability or performance comparisons between advisors and decision-makers affect adoption behavior. When evaluating employee advice, leaders unconsciously engage in "upward comparison" and "downward comparison" (Matthews & Kelemen, 2025). During upward comparison, leaders may feel self-threat—especially when subordinates' advice implies their own inadequacy—and reject advice to protect self-esteem (Rizzo et al., 2024). Conversely, parallel and downward comparisons may increase the likelihood of advice adoption, particularly when advice is perceived as enhancing organizational performance or personal capability (Perry-Smith & Mannucci, 2017). Other research identifies social comparison orientation as a key individual-difference variable: leaders high in social comparison orientation pay greater attention to employee advice and are more influenced by employees' performance, while those low in this orientation rely less on comparisons with employees and focus more on advice quality itself (Gerber et al., 2018). These studies support a "comparison psychology → emotional response → adoption behavior" mechanism, but their application is limited to comparisons among human advice. In human-AI advice adoption, although research finds that leaders high in social comparison orientation may prefer GenAI advice to "avoid interpersonal conflict" (Rizzo et al., 2024), such studies treat social comparison orientation only as a moderator without probing the theory's core dimensions, and thus still fail to explain leaders' comparison logic regarding GenAI advice.
In summary, the insufficient application of social comparison theory manifests at two levels: in comparison dimensions, no research has explored the roles of "social dynamic comparison, performance-reward comparison, and agency capability comparison" in human-AI interaction scenarios (Matthews & Kelemen, 2025); at the level of theoretical application, cross-level comparison mechanisms from individual to team remain unclarified. Therefore, it is necessary to strengthen theoretical integrity and coherence, expand theoretical application scenarios, and use social comparison theory as a core framework to construct a multi-level comparison model of human-AI advice differential perception.
Finally, existing research pays little attention to practical issues. Most studies analyze antecedents of advice adoption, such as various objective performance metrics of algorithmic technology. Algorithmic accuracy helps enhance advice credibility, with high accuracy reducing decision risk and increasing the probability of leader adoption (Baines et al., 2024); explainability, as a key technical attribute, has been found to lead leaders to question GenAI advice's "black box" characteristics (Ribeiro et al., 2016), triggering distrust (Glikson & Woolley, 2020). However, existing research lacks intervention-strategy studies for practical issues such as mitigating algorithm aversion, reducing identity threat, and managing team advice risks. This severely limits its practical guidance value. Therefore, it is necessary to analyze effective intervention strategies, such as training and incentives, to provide practical solutions for organizations to optimize the effectiveness of human-AI collaborative advice adoption.
3. Research Framework
Based on social comparison theory's three core dimensions—social dynamic comparison, performance-reward comparison, and agency capability comparison (Matthews & Kelemen, 2025)—this study constructs an integrated model of leaders' differential perception of employee-GenAI advice adoption. The research framework systematically unfolds through five interconnected sub-studies: Study 1 explores social comparison orientation as a boundary condition, examining its interactive effects on advice quality comparison and appreciation; Study 2 examines characteristic differences between two advice sources, analyzing mechanisms affecting leader emotions, behaviors, and decision effectiveness with relative superiority as a moderator; Study 3 tests effectiveness differences between the two advice types in task work versus interpersonal work; Study 4 analyzes potential risks in employee teams, GenAI teams, and employee-GenAI collaborative teams from a team perspective, explaining how different advice risks affect leader decisions through responsibility attribution processes; Study 5 clarifies how advisor psychological and functional barriers affect advice quality and how intervention strategies enhance employee-GenAI team collaborative advice quality. The overall research framework is shown in Figure 1 [FIGURE:1].
3.1 Study 1: Social Comparison Orientation in Employee-GenAI Advice Quality
Based on social comparison theory, this study focuses on advice quality comparison between employees and GenAI, exploring how advice quality differences influence adoption behavior through leader affective attitudes (appreciation), introduces social comparison orientation as a boundary condition, reveals differential reaction mechanisms to the two advice types, and constructs an affective response process model of leader reactions to employee-GenAI advice. The theoretical model is shown in Figure 2 [FIGURE:2].
3.1.1 Advice Quality Comparison (Employee vs. GenAI) and Advice Appreciation
Advice quality is a core antecedent influencing decision-makers' adoption intentions (Ikeda, 2024): high-quality advice significantly increases adoption likelihood, while low-quality advice increases rejection probability (Yaniv, 2004). Advice quality judgment is complex and multidimensional. Existing research primarily measures advice quality using scales by Goldsmith and MacGeorge (2000), Jones and Burleson (1997), MacGeorge et al. (2004), and Feng and MacGeorge (2010). These scales generally use single dimensions capturing advice helpfulness, appropriateness, sensitivity, effectiveness, and supportiveness (MacGeorge et al., 2002; Goldsmith & MacGeorge, 2000; MacGeorge et al., 2004; Feng & MacGeorge, 2010), but have not developed more fine-grained advice quality scales.
Recently, organizational behavior and human resource management scholars have begun studying voice quality (Brykman & Raver, 2021; Farh et al., 2024; Ng et al., 2022; Parke et al., 2022; Wolsink et al., 2019; Jiang et al., 2025). Researchers have used dummy variables (0/1 variable for high/low voice quality) (Wolsink et al., 2019; Parke et al., 2022), single-dimensional scales (Ng et al., 2022), and multi-dimensional scales (Brykman & Raver, 2021). Brykman and Raver's (2021) four-dimensional scale measures voice principledness, feasibility, novelty, and organizational focus, providing a foundation for advice quality judgment.
Social comparison is a "basic, universal, and powerful human tendency" (Corcoran et al., 2011). Based on social comparison theory (Festinger, 1954), leaders allocate cognitive resources and form differential appreciation attitudes by comparing employee and GenAI advice quality (Mussweiler, 2003; You et al., 2022). In other words, individuals' appreciation for different advice changes as their quality perceptions change (Pescetelli & Yeung, 2021). When perceiving GenAI advice quality as superior, leaders prioritize its instrumental value, increasing appreciation for GenAI advice while potentially decreasing appreciation for employee advice due to limited cognitive resources; conversely, when judging employee advice quality as superior, leaders enhance appreciation for employee advice and weaken recognition of GenAI advice (Dang & Liu, 2024; You et al., 2022).
Based on this discussion, we propose Propositions 1 and 2:
Proposition 1: When leaders perceive GenAI advice quality as higher than employee advice quality, their appreciation for GenAI advice increases (positive path), while their appreciation for employee advice indirectly decreases (negative path).
Proposition 2: When leaders perceive employee advice quality as higher than GenAI advice quality, their appreciation for employee advice increases (positive path), while their appreciation for GenAI advice indirectly decreases (negative path).
3.1.2 Advice Appreciation and Leader Advice Response
Affect is a key mediator connecting advice evaluation and adoption behavior. Positive affect not only enhances individuals' liking for advice itself but also stimulates helping and generous behaviors (Ruan et al., 2024; Milyavsky & Gvili, 2024), and significantly strengthens trust in others (Jones & George, 1998). Therefore, positive affect enhances leaders' trust in advisors, thereby increasing adoption intention. Specifically, GenAI advice may be perceived as more capable in technical or mathematical tasks (Longoni & Cian, 2020), and leaders' appreciation for GenAI advice strengthens their recognition of GenAI's capability advantages, increasing adoption intention while potentially reducing acceptance of employee advice through contrast effects. Conversely, appreciation for employee advice makes leaders value its characteristics of fitting organizational culture and emotional needs (Bailey et al., 2023), inclining them toward employee advice and potentially generating rejection psychology toward GenAI advice (Logg et al., 2019).
In summary, leaders' appreciation for GenAI versus employee advice not only directly affects their adoption intentions but also indirectly influences their acceptance attitudes toward the other advice type through changes in affective states (Logg et al., 2019; You et al., 2022). This complex psychological mechanism reveals how leaders balance different advice sources and make trade-offs between appreciation and adoption. Based on this, we propose Propositions 3 and 4:
Proposition 3: Leaders' appreciation for GenAI advice positively influences their intention to adopt GenAI advice but negatively influences their adoption of employee advice.
Proposition 4: Leaders' appreciation for employee advice positively influences their intention to adopt employee advice but negatively influences their adoption of GenAI advice.
3.1.3 The Moderating Role of Social Comparison Orientation
Social comparison orientation refers to individuals' tendency to actively seek or attend to comparisons with others in social situations. Gibbons and Buunk (1999) developed a scale measuring social comparison orientation, which captures the extent to which self-evaluation depends on others' performance. Individuals high in this orientation tend to connect what happens to others with themselves and are interested in others' characteristics and achievements in similar situations. Like goal orientation, social comparison orientation expresses a stable individual trait, reflecting personality-driven individual differences. It is therefore often placed under the umbrella of social comparison theory but has a narrower scope, referring primarily to an individual's tendency or motivation (Gibbons & Buunk, 1999; Gerber et al., 2018).
Individuals high in social comparison orientation are sensitive to interpersonal relationships, easily influenced by others' performance, and prioritize social evaluation and interpersonal risk in decision-making (Gerber et al., 2018), whereas those low in this orientation rely less on social comparison for self-evaluation. This difference moderates the relationship between advice quality comparison and advice appreciation: leaders high in social comparison orientation, due to high interpersonal sensitivity (Kämmer et al., 2023), prefer choices that avoid interpersonal risk. Facing high-quality employee advice, they readily perceive upward-comparison threats (worrying that their competence will be questioned) and are wary of adopting a specific employee's advice lest it trigger team conflict (Rizzo et al., 2024). GenAI advice involves no interpersonal ability comparison and thus avoids both self-threat and interpersonal conflict, so individuals high in social comparison orientation are more willing to accept it, while leaders low in social comparison orientation focus more on the actual decision scenario and more readily appreciate the quality advantages of employee advice.
We infer that this individual trait may also shape emotional reactions and behavioral outcomes toward different advice sources (Tai et al., 2024). Individuals high in social comparison orientation care more about others' opinions and are more susceptible to external references, so we infer they may prefer GenAI advice; conversely, individuals low in social comparison orientation may prefer employee advice. In advice judgment analysis, the psychological weight allocated to a given piece of advice directly relates to its adoption rate (Tversky & Koehler, 1994). Accordingly, we propose:
Proposition 5: For individuals high in social comparison orientation, the positive effect of GenAI advice quality on its appreciation is amplified, while the positive effect of employee advice quality is attenuated.
Proposition 6: For individuals low in social comparison orientation, the positive effect of employee advice quality on its appreciation is amplified, while the positive effect of GenAI advice quality is attenuated.
Social information processing theory states that people's interpretation of information cues is socially contextualized; the same information may be assigned different meanings in different social contexts (Lord & Smith, 1983). High social comparison orientation individuals tend to [text incomplete in original]; low social comparison orientation individuals focus more on relational attributes of employee advice, more easily generating appreciation emotions when employee advice quality is superior, thus adopting employee advice (Chen et al., 2025). Therefore, we propose:
Proposition 7: For individuals high in social comparison orientation, GenAI advice quality advantage positively influences leaders' intention to adopt GenAI advice by increasing appreciation for GenAI advice.
Proposition 8: For individuals low in social comparison orientation, employee advice quality advantage positively influences leaders' intention to adopt employee advice by increasing appreciation for employee advice.
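As an illustrative sketch (not part of the original research design), the moderated mediation structure implied by Propositions 3, 5, and 7—quality advantage → appreciation → adoption intention, moderated by social comparison orientation—can be expressed with simulated data. All variable names, operationalizations, and coefficient values below are hypothetical assumptions chosen only to show the model's shape:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical standardized constructs:
# quality_adv: leader's perceived GenAI-minus-employee advice quality advantage
# sco:         leader's social comparison orientation
quality_adv = rng.normal(size=n)
sco = rng.normal(size=n)

# Proposition 5 (illustrative): SCO amplifies the effect of GenAI quality
# advantage on appreciation, i.e., a positive interaction term.
appreciation = (0.5 * quality_adv + 0.2 * sco
                + 0.3 * quality_adv * sco
                + rng.normal(scale=0.5, size=n))

# Propositions 3/7 (illustrative): appreciation carries the effect
# through to adoption intention.
adoption = 0.6 * appreciation + rng.normal(scale=0.5, size=n)

# Recover the moderation effect with ordinary least squares.
X = np.column_stack([np.ones(n), quality_adv, sco, quality_adv * sco])
coefs, *_ = np.linalg.lstsq(X, appreciation, rcond=None)

# b-path: regress adoption on appreciation; the conditional indirect
# effect at mean SCO is the a-path times the b-path, as in simple mediation.
b_path, *_ = np.linalg.lstsq(appreciation.reshape(-1, 1), adoption, rcond=None)
indirect = coefs[1] * b_path[0]
```

Testing the propositions on real data would of course require validated scales and bootstrap-based conditional indirect effect estimation rather than this toy recovery of known coefficients; the sketch only makes the hypothesized paths concrete.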
3.2 Study 2: Comparative Study on Decision Effectiveness of Employee-GenAI Advice Sources
Social comparison theory posits that individuals experience emotional reactions when comparing themselves with others (Dang & Liu, 2024), which affect subsequent behaviors (Matthews & Kelemen, 2025). Advice from various sources serves as important assistance for leader decision-making; good advice helps leaders optimize decisions and improve decision effectiveness through cognitive supplementation and efficiency enhancement (Chakraborty et al., 2024). This study compares objective characteristic differences between advice sources, analyzes their impact on decision effectiveness through affective mediation and advice adoption/rejection behavior mediation, and introduces relative superiority as a moderator. The theoretical model is shown in Figure 3 [FIGURE:3].
3.2.1 Advice Source Comparison (Employee vs. GenAI)
Explainability, availability, and accuracy are important dimensions for evaluating advice source quality, directly affecting decision effectiveness. Explainability determines whether advice is easily understood and trusted (Chakraborty et al., 2024); accuracy directly affects decision quality (Mahmud et al., 2022); availability affects decision efficiency (Brynjolfsson & McAfee, 2014). These dimensions not only reveal advantages and disadvantages of different advice sources but also help leaders better select and rely on different advice sources during decision-making.
Explainability refers to advice's logical clarity, understandability, and causal transparency (Miller, 2019). Employee advice, based on personal experience and organizational context, can articulate decision logic in natural language consistent with human cognitive frameworks (Endsley, 2023; Minh et al., 2022). GenAI, constrained by its "black box," has underlying logic difficult for non-technical personnel to understand and may generate unreliable explanations with fabricated data sources, thus having weaker explainability than human advice (Wesche et al., 2024).
Accuracy refers to the degree to which advice matches objective facts, optimal solutions, or task goals (Mahmud et al., 2022), with performance highly dependent on task type and data quality. Leveraging big data and algorithmic advantages, GenAI is typically more objective and accurate in data-intensive tasks (Wesche et al., 2024). However, in ambiguous situations, employees can improve advice accuracy through social insight and tacit knowledge (Davenport & Kirby, 2016), where GenAI may fail due to lack of social insight.
Availability refers to advice accessibility, response speed, and cost efficiency (Ramaul et al., 2024). High availability helps leaders respond quickly to decision needs. Employee advice is constrained by time and space, coordination costs, and organizational hierarchy; it may be delayed by workload or filtered during information processing, and cognitive resource limitations restrict the speed at which employees can process large data volumes, making it difficult for them to provide advice quickly and accurately (Luo et al., 2019). GenAI, by contrast, transcends temporal and spatial constraints, providing round-the-clock instant service with extremely high availability (Tong et al., 2021). Taken together, therefore, GenAI advice sources offer stronger availability than human employees.
3.2.2 Positive Response Path: Appreciation and Leader Advice Adoption
GenAI advice's high availability and relatively high accuracy give it significant advantages in urgent or high-intensity decision contexts, easily triggering leader appreciation and promoting advice adoption. Specifically, GenAI leverages massive data and algorithmic analysis to effectively avoid potential human cognitive biases (Glikson & Woolley, 2020), often making more accurate judgments than employee advice in structured tasks (Tong et al., 2021). Additionally, GenAI's instant response characteristics can quickly meet decision needs, with efficiency advantages particularly significant in emergency situations (Agrawal et al., 2022). In decision contexts emphasizing efficiency and data-driven approaches, GenAI can more efficiently provide objective, unbiased information, and this comprehensive advantage easily generates leader appreciation. Since affective reactions influence subsequent behaviors (Van Kleef, 2009), leaders' appreciation for GenAI advice transforms into positive cognition and trust, strengthening beliefs that GenAI helps solve problems and reduce risks, thus inclining them toward adoption. Based on this, we propose:
Proposition 9: Compared to employee advice, GenAI advice's advantages lead leaders to appreciate GenAI advice and subsequently adopt it.
The ultimate purpose of advice adoption is to enhance decision effectiveness, that is, achieving organizational goals by selecting the optimal course of action (Burton et al., 2020). Its core lies in objective decision quality and execution acceptance, reflected in attributes such as rationality and timeliness (Shamim et al., 2020). An advice source's explainability, accuracy, and availability are all closely related to decision effectiveness; advice that holds advantages on these dimensions is more likely to optimize decision outcomes.
In the positive response path, GenAI gains leader appreciation and adoption through high availability and accuracy advantages. Adopting such advice directly improves decision efficiency and scientific quality, reducing human cognitive biases, enhancing decision objectivity and accuracy (Glikson & Woolley, 2020), and accelerating response speed in emergencies. Therefore, adopting GenAI advice with key advantages typically positively impacts decision effectiveness (Baines et al., 2024). We thus propose:
Proposition 10: Advice sources indirectly affect decision effectiveness through the serial mediation of appreciation and leader advice adoption.
3.2.3 Negative Response Path: Aversion and Leader Advice Rejection
Despite GenAI's clear advantages in availability and accuracy, its "black box" operation results in low explainability, easily causing negative leader reactions. Because GenAI decision processes often lack transparency, leaders struggle to understand advice logic and generation mechanisms (Burton et al., 2020; Glikson & Woolley, 2020). In contrast, employee advice is typically based on experience and intuition, more easily understood and accepted by leaders (Doshi-Velez & Kim, 2017). Individuals prefer trustworthy information sources that are easy to understand (Hogg et al., 1995), so when leaders cannot comprehend GenAI advice, they easily generate negative emotions like distrust, anxiety, or even aversion (Glikson & Woolley, 2020). In complex or high-risk decision contexts, leaders need to fully understand advice logic to assume decision responsibility; insufficient explanation exacerbates leaders' unease and distrust, generating aversion emotions (Ahn et al., 2021). In high affect-involvement tasks, GenAI's lack of emotional resonance more easily triggers alienation and aversion (Longoni et al., 2019; Giroux et al., 2022). Such aversion emotions activate leaders' loss aversion psychology, prompting them to reject GenAI advice to avoid uncertainty (Miller et al., 2019).
We therefore propose:
Proposition 11: Compared to employee advice, GenAI advice's disadvantages lead leaders to develop aversion toward GenAI advice and subsequently reject it.
In the negative response path, low explainability makes GenAI advice susceptible to leader aversion and rejection. If GenAI advice is objectively high-quality, rejection causes leaders to miss opportunities to leverage high-quality data and algorithmic support, leading decisions to rely on other information sources, increasing subjective judgment risk and reducing decision accuracy and efficiency (Cecil et al., 2024). Aversion emotions inhibit adoption intentions (Glikson & Woolley, 2020), and leaders' rejection of high-quality advice may lead to suboptimal decisions, affecting decision rationality and execution effectiveness (Damen et al., 2008). Thus we propose:
Proposition 12: Advice sources affect decision effectiveness through the serial mediation of aversion and leader advice rejection.
3.2.4 The Moderating Role of Relative Superiority
Leaders' subjective cognition of advice sources also affects their reactions. Relative superiority, a key moderating variable, refers to the degree to which leaders perceive an advice source as holding an overall advantage in comparison (Choudhury & Karahanna, 2008). This perception forms an overall impression after the "psychological processing" of objective features, amplifying or weakening those features' impact on subsequent emotional and behavioral reactions (Dang & Liu, 2022).
When leaders perceive GenAI's relative superiority as high, they are more likely to focus on its advantages. Even when GenAI advice objectively lacks explainability, leaders may still appreciate and adopt it. Superiority perception enhances the credibility of GenAI's capabilities (Shrestha et al., 2019) and strengthens decision confidence (Marocco et al., 2024), thereby reinforcing appreciation emotions and adoption intentions and improving decision effectiveness. Conversely, when leaders perceive the GenAI advice source's relative superiority as low, they are more likely to focus on its disadvantages, amplifying negative cognition (Dietvorst et al., 2015), strengthening aversion emotions toward GenAI, weakening fault tolerance, and making them more strongly inclined to reject GenAI-provided advice (Glikson & Woolley, 2020).
Based on this, we propose:
Proposition 13: Relative superiority strengthens the positive response path; when leaders perceive GenAI advice source's relative superiority as high, they more easily appreciate and adopt its advice, thereby improving decision effectiveness.
Proposition 14: Relative superiority weakens the negative response path; when leaders perceive the GenAI advice source's relative superiority as low, they more readily develop aversion toward and reject its advice, thereby reducing decision effectiveness.
3.3 Study 3: Comparative Study on Interactive Effects of Employee-GenAI Advice Content and Advice Strategy
Based on social comparison theory, this study explores how GenAI advice and employee advice content (task work vs. interpersonal work) and advice strategy (direct vs. indirect voice) affect advice effectiveness. It aims to reveal the mediating mechanisms of leader identity threat and identity affirmation in interpersonal and task work affecting decision effectiveness, and analyze differential effects of employee versus GenAI advice in upward/downward comparison. The theoretical model is shown in Figure 4 [FIGURE:4].
3.3.1 Differential Effectiveness of Employee-GenAI Interpersonal Work Advice
Social comparison theory emphasizes that individuals tend to evaluate their own abilities and opinions through comparison with others (Festinger, 1954). In interpersonal work, employees are more inclined to compare with human advisors who have similar backgrounds and experiences. Human advisors can provide more emotionally resonant and context-adaptive advice, helping employees better understand and accept advice content and improving decision effectiveness. Peer advice as a prosocial behavior enables employees to perceive "goodwill" and support from colleagues (Zhang et al., 2019). In contrast, GenAI lacks emotional resonance and interpersonal interaction capability, struggling to provide advice meeting employees' emotional needs, resulting in lower decision effectiveness (Baines et al., 2024).
Social comparison also affects individuals' cognition of their own work performance and effort levels. Employees observe and compare each other's performance and outcomes at work (Matthews & Kelemen, 2025). When receiving peer advice, employees are more likely to view it as mutual assistance based on shared goals and experiences, thus more willing to adopt and practice the advice. Additionally, peer advice enhances employees' "feeling of being noticed," and this support from organizational members motivates them to reciprocate with higher investment, thereby improving work performance (Zhang et al., 2019). Conversely, GenAI advice lacks this social relationship-based motivational effect, making it difficult to significantly promote performance improvement like employee advice (Baines et al., 2024). We therefore propose:
Proposition 15: In interpersonal work advice, employee advice effectiveness is significantly higher than GenAI advice.
3.3.2 Differential Effectiveness of Employee-GenAI Task Work Advice
In task work scenarios, the focus of social comparison differs. GenAI possesses powerful data processing and rapid analysis capabilities, enabling it to simulate human thinking processes and intelligent behaviors and endowing machines with the ability to execute production tasks independently (Aghion et al., 2017). When facing tasks that require extensive data support and rational analysis, individuals evaluating advice effectiveness view GenAI as an information-advantaged reference object. In contrast, employee advice may fall short in data processing speed and information comprehensiveness, so GenAI advice often demonstrates higher effectiveness in such contexts. GenAI's deep learning and autonomous decision-making functions can provide employees with new insights and valuable decision recommendations. For standardized, procedural task work, GenAI can replace or optimize repetitive, cumbersome, or inefficient work (Chui et al., 2016), with its generated advice providing precise, standardized operational guidance that helps reduce error rates and improve work efficiency and quality. Whereas employee performance in task execution is affected by personal skill levels and work attitudes, the stability and accuracy of GenAI advice make it more competitive in improving work performance. We therefore propose:
Proposition 16: In task work advice, GenAI advice effectiveness is significantly higher than employee advice.
3.3.3 The Mediating Role of Leader Identity Threat and Advice Rejection
In interpersonal work contexts, social comparison prompts leaders to focus on self-identity cognition. According to social comparison theory, upward comparison (comparing with superior others) can motivate self-improvement, while downward comparison (comparing with inferior others) can enhance self-esteem and confidence. Employee advice may trigger upward comparison between leaders and employees. If leaders perceive the advice as challenging their work status or competence, they easily develop resistance. Research confirms that when leaders perceive ability or status gaps with advisors, their perceived value of, and intention to adopt, that advice decrease (Duan & Wei, 2012). Leaders may also experience psychological threat or a sense of inequality, which subsequently affects advice adoption behavior (Han & Xiao, 2020). When employees advise leaders, the act may be interpreted as challenging leader authority or threatening face, implying employee superiority; such comparison easily generates identity threat for leaders. As authority figures sensitive to threats to face and competence, leaders are further inclined to reject employee advice. Thus identity threat and advice rejection play a serial mediating role between advice source and advice effectiveness. We therefore propose:
Proposition 17: When advice content concerns interpersonal work, compared to employee advice, GenAI advice is more likely to trigger leader identity threat, subsequently causing leader advice rejection and ultimately resulting in lower advice effectiveness.
Proposition 18: When advice content concerns task work, compared to GenAI advice, employee advice is more likely to trigger leader identity threat, subsequently causing leader advice rejection and ultimately resulting in lower advice effectiveness.
3.3.4 The Mediating Role of Leader Identity Affirmation and Advice Adoption
In task work contexts, leader identity affirmation and advice adoption play mediating roles. When facing advice from employees and GenAI, leaders consciously or unconsciously engage in social comparison, subsequently affecting their self-identity cognition and judgment.
Research shows leader openness traits closely relate to employee advice adoption rates. Leaders with open attitudes have higher advice adoption likelihood (Detert et al., 2007). Pei and Wu (2023) note that when leaders perceive employees' constructive opinions, they are more likely to adopt them; and when employees combine organizational goals with leader concerns and propose suggestions in positive, constructive ways, adoption likelihood further increases. This adoption behavior both provides positive feedback and value affirmation for advice and motivates employees to continue suggesting, thereby improving overall advice effectiveness (Zhang et al., 2020). Thus, leader identity affirmation promotes advice adoption behavior, forming a serial mediation between employee advice and advice effectiveness. We therefore propose:
Proposition 19: When advice content concerns task work, compared to employee advice, GenAI advice is more likely to promote leader identity affirmation, subsequently driving leader advice adoption and ultimately resulting in higher advice effectiveness.
Proposition 20: When advice content concerns interpersonal work, compared to GenAI advice, employee advice is more likely to promote leader identity affirmation, subsequently driving leader advice adoption and ultimately resulting in higher advice effectiveness.
3.3.5 Differential Moderating Effects of Direct and Indirect Voice Strategies
In Chinese society, which values interpersonal harmony, there is a principle of "praising good deeds publicly, admonishing faults privately." Even when opinions differ, people tend toward private communication to preserve face. This suggests that different voice strategies may differentially affect leaders' perception of face threat. Advice strategies can be divided into direct and indirect voice. Direct voice refers to employees expressing opinions transparently and directly. This approach may be perceived as threatening because it implies subordinates are instructing superiors, demonstrating that subordinates know more and are more capable; it also declares that superiors' previous decisions or plans are erroneous or need correction (Dalal & Bonaccio, 2010), and may make leaders feel humiliated or offended (MacGeorge et al., 2004). In contrast, indirect voice refers to employees proposing suggestions in humble, respectful ways that maintain leader dignity (Dillard et al., 1997); it has even been characterized as a manipulative advice strategy (Han et al., 2017). Employees using this strategy take superiors' feelings into account, thereby increasing leaders' acceptance of the information and posing a relatively weaker face threat. Although direct voice is not inherently hostile, it easily leaves impressions of disrespect and assertiveness. Thus direct voice may affect leaders' identity threat perception, while indirect voice may affect leader identity affirmation. We therefore propose:
Proposition 21: The effect of advice content on advice rejection through leader identity threat is moderated by direct voice strategy.
Proposition 22: The effect of advice content on advice adoption through leader identity affirmation is moderated by indirect voice strategy.
3.4 Study 4: Comparative Study on Responsibility Mechanisms in Employee-GenAI Team Advice Response
Based on attribution theory, this study analyzes potential risks in employee teams, GenAI teams, and employee-GenAI collaborative teams, exploring their differential effects on leader advice adoption or rejection. It constructs an "advice risk-leader response" model to reveal the different influence mechanisms of the three team advice types on leader decision-making, clarifying how responsibility attribution intervenes: that is, how different team advice risks affect leader decisions through the process of defining responsibility. The theoretical model is shown in Figure 5 [FIGURE:5].
3.4.1 Team Advice Risk and Leader Advice Response
Employee-GenAI team advice mainly presents three forms: employee team advice, GenAI team advice, and employee-GenAI team collaborative advice. Different advice types carry differential risks, leading to different leader advice responses:
(1) Employee Team Advice Risk and Leader Advice Response
Employee team advice typically originates from team members' daily observations, experiences, and reflections. While reflecting frontline experience, it carries certain risks: (a) cognitive bias risk: personal experience, cognitive biases, and emotional factors may cause information partiality or distortion (Kahneman et al., 2011); (b) information overload risk: large volumes of advice may overwhelm management, making effective information screening difficult (Graf & Antoni, 2023); (c) hierarchical pressure risk: owing to power structures or performance pressure, employee advice may cater to leader preferences, resulting in non-objective suggestions (Pfrombeck et al., 2023).
These risks weaken employee team advice credibility, prompting leaders to seek more objective alternatives. GenAI team advice, relying on big data analysis and algorithmic optimization, can provide systematic, objective decision support (Brynjolfsson & McAfee, 2014). Employee-GenAI collaborative advice can balance employee creativity and GenAI efficiency, compensating for employee team advice limitations to some extent (Cheng et al., 2023). Based on this, we propose:
Proposition 23: Employee team advice risk leads leaders to reject employee team advice and adopt GenAI team advice or employee-GenAI team collaborative advice.
(2) GenAI Team Advice Risk and Leader Advice Response
Although GenAI team advice possesses data processing advantages, it still carries multiple risks: (a) algorithmic bias risk: input data may contain historical biases or incompleteness (Nelson, 2019), potentially causing unfair decisions; (b) over-dependence risk: long-term reliance on GenAI may weaken employees' autonomous decision-making ability and team innovation and adaptability (Hagendorff, 2020). Blind trust from ignoring GenAI technical limitations further damages decision quality (Jakubik et al., 2022); (c) data privacy and security vulnerability risk: GenAI's dependence on large-scale data may trigger data leaks, threatening personal privacy and enterprise security.
When GenAI system advice cannot meet diverse decision-making needs, leaders may return to employees' experiential and creative advice, believing employee advice better reflects actual conditions and interpersonal factors. Meanwhile, employee-GenAI collaborative advice's complementary advantages also make it a better choice (Cheng et al., 2023). We therefore propose:
Proposition 24: GenAI team advice risk leads leaders to reject GenAI team advice and adopt employee team advice or employee-GenAI team collaborative advice.
(3) Employee-GenAI Collaborative Team Advice Risk and Leader Advice Response
Employee-GenAI collaborative team advice attempts to integrate human-AI advantages, but its risks are more complex: (a) collaboration conflict risk: human-AI cognitive inconsistency or goal conflict (Brynjolfsson & McAfee, 2014); (b) responsibility definition risk: difficulty in clarifying where responsibility lies when decision errors occur (Cheng et al., 2021); (c) cognitive inertia risk: employees' over-reliance on GenAI advice in collaborative decision-making leads to agency degradation, a social loafing phenomenon that is particularly pronounced in complex tasks; (d) ethical conflict risk: when GenAI advice conflicts with employee values, it may trigger ethical disputes (Siau & Wang, 2020).
Thus, the complexity of employee-GenAI collaborative advice risk places leaders in a decision dilemma: they can neither fully trust employee advice nor endorse GenAI advice, leading to rejection of all three team advice types. Based on this, we propose:
Proposition 25: Employee-GenAI collaborative advice risk leads leaders to reject all three types of team advice.
3.4.2 The Mediating Role of Responsibility Attribution
Based on attribution theory, this study deeply explores the internal mechanisms through which employee-GenAI team advice risks affect leader advice response, focusing on how three levels—employee team responsibility attribution, GenAI team responsibility attribution, and shared responsibility attribution—influence leader decision-making behavior. It clarifies how different advice risk types shape leader team advice response processes through responsibility attribution patterns.
First, employee team advice risk affects leader decisions through employee team responsibility attribution. When employee team advice exhibits cognitive bias or information overload, leaders tend toward internal attribution, attributing problems to employee competence limitations or motivational factors. This employee team responsibility attribution leads leaders to neglect employee advice (Aschauer et al., 2024) and turn to adopting more objective GenAI advice or collaborative advice. Second, GenAI team advice risk affects leader decisions through GenAI responsibility attribution. Facing algorithmic bias or data security issues in GenAI advice, leaders engage in technical attribution, attributing responsibility to system defects, thus rejecting GenAI advice and seeking the supplement of human experience in employee advice or collaborative advice. Third, employee-GenAI collaborative team advice risk affects leader decisions through shared responsibility attribution. When collaborative advice results are unsatisfactory, leaders form shared responsibility attribution; unable to clearly demarcate human-AI responsibility boundaries, they fall into a decision dilemma and reject all advice types.
(1) The Mediating Role of Employee Team Responsibility Attribution
When employee advice fails to meet organizational expectations or shows deviation, managers tend to attribute the failure to employees' personal abilities or subjective motivations rather than external factors, a process called internal attribution (Weiner, 1985). According to attribution theory, when managers attribute employee advice failure to employees' personal competence or emotional factors, this triggers negative evaluation and leads to rejection of employee advice (Shaver, 2016). In contrast, leaders view the algorithmic objectivity of GenAI team advice and the human-AI complementarity of employee-GenAI collaborative advice as more reliable choices (Cheng et al., 2023). We therefore propose:
Proposition 26: Employee team responsibility attribution mediates the relationship between employee team advice risk and leader advice response (rejecting employee team advice, adopting GenAI team advice or employee-GenAI team collaborative advice).
(2) The Mediating Role of GenAI Team Responsibility Attribution
According to attribution theory, GenAI team advice failure often triggers responsibility attribution to GenAI systems, particularly when its advice fails to solve problems (Miller, 2019). GenAI system advice quality may be affected by data bias, data incompleteness, or algorithmic errors (Nishant et al., 2024), and leaders attribute GenAI system failure to technical defects or data issues rather than external environmental influences. This attribution weakens leader trust in GenAI, believing its advice lacks controllability, and subsequently rejecting GenAI team advice. In contrast, employee team advice's human controllability and human supervision advantages in employee-GenAI collaborative advice are more easily recognized (Brynjolfsson & McAfee, 2014). Advice with human participation, though potentially containing cognitive biases, is typically more transparent and understandable than pure GenAI system's "black box" nature. Based on human-machine contrast psychology, leaders prioritize adopting employee team advice and employee-GenAI team collaborative advice. We therefore propose:
Proposition 27: GenAI team responsibility attribution mediates the relationship between GenAI team advice risk and leader advice response (rejecting GenAI team advice, adopting employee team advice or employee-GenAI team collaborative advice).
(3) The Mediating Role of Shared Responsibility Attribution
Employee-GenAI collaborative team advice can combine AI's technical advantages with employee creativity (Yue & Li, 2023), yet this collaboration carries potential risks from both technical defects and human operational limitations. When employee-GenAI collaborative team advice yields unsatisfactory results, leaders form shared responsibility attribution, not fully attributing responsibility to either GenAI or employees (Cheng et al., 2021).
This attribution may trigger leaders' emotional and cognitive conflicts, mediating managers' decision behaviors (Weiner, 1985). Emotionally, when responsibility is shared, managers may distrust both GenAI and employee advice because failure cannot be clearly blamed on a single party (Schoenherr & Thomson, 2024). Unclear responsibility boundaries may cause managers to distrust all advice and no longer incline toward adopting any party's advice, choosing instead to avoid decision risk (Jakubik et al., 2022). Ultimately, leaders reject employee, GenAI, and collaborative team advice because they cannot clearly identify accountability targets. We therefore propose:
Proposition 28: Shared responsibility attribution mediates the relationship between employee-GenAI collaborative team advice risk and leader advice response (rejecting all three team advice types).
3.4.3 The Moderating Role of Leader Advice Risk Preference
Leader advice risk preference refers to leaders' tolerance for potential risks when evaluating team advice. Leaders' risk preference levels show individual differences (Tversky & Kahneman, 1992), which moderate the path through which advice risk affects leader response via responsibility attribution. Risk-averse leaders are more sensitive to responsibility ambiguity; even at low risk levels, accountability concerns may strengthen their negative responsibility attribution and lead them to reject advice. Risk-seeking leaders focus more on advice's potential value; they may tolerate certain risks, weaken responsibility attribution's negative impact on advice adoption, and attempt to adopt and refine advice whose risks are controllable (such as localized cognitive biases or correctable technical defects). Based on this, we propose:
Proposition 29: Advice risk affects leader advice response through team responsibility attribution, moderated by leader advice risk preference.
3.5 Study 5: Comparative Study on Barriers and Intervention Mechanisms in Employee-GenAI Advice Response
Based on social comparison theory, this study explores advisor psychological and functional barriers and their impact on advice quality. It examines leader response differences to employee and GenAI advice and tests how intervention strategies enhance employee-GenAI collaborative advice quality. The theoretical model is shown in Figure 6 [FIGURE:6].
3.5.1 Advisor (Employee vs. GenAI) Barrier Comparison
In work contexts, when employees compare themselves with GenAI, they may face psychological and functional barriers. At the macro level, Rjab et al. (2023) use the Technology-Organization-Environment (TOE) framework to categorize AI adoption barriers into three major types (technological, organizational, and environmental) spanning 18 aspects. Booyse and Scheepers (2024) divide advice adoption barriers into human social dynamics, restrictive regulations, creative work environments, lack of trust and transparency, dynamic business environments, loss of power and control, and ethical concerns. At the micro level, advice barrier factors divide into two categories. Since this study's comparison objects are individuals and teams, we draw on Mahmud et al.'s (2023) micro-level concepts of psychological and functional barriers. Functional barriers arise from perceptions of the substantive changes required for technology adoption (such as usage methods, value, and risk) and include usage barriers, value barriers, and risk barriers; they focus on the capability-technical level. Psychological barriers stem from conflicts between existing beliefs and innovative ideas and include traditional barriers and impression barriers; they focus on the affective-cognitive level (Mahmud et al., 2023).
According to social comparison theory, employees may face psychological and functional barriers when comparing themselves with GenAI (Matthews & Kelemen, 2025). When employees perceive the barriers between themselves and GenAI as controllable, this comparative pressure may transform into self-improvement motivation, promoting advice quality improvement (Yang et al., 2025). However, if perceived barriers are too large, they may trigger anxiety or reduced self-efficacy, weakening motivation (Edwards et al., 2024). Comparative pressure may cause self-doubt and impair performance (Matthews & Kelemen, 2025). Large gaps often lead to negative emotions and helplessness; employees may employ defensive strategies, devaluing GenAI advice to maintain self-esteem, which instead inhibits improvement in their own advice quality (Böhm et al., 2023). In contrast to the improvement motivation generated under low barriers, excessive barriers may weaken employee confidence and performance (Van & Jenna, 2021). We therefore propose competing propositions:
Proposition 30: Advisor barrier comparison (psychological barriers, functional barriers) affects employee advice motivation comparison. When employees perceive barriers, they generate positive self-improvement motivation, thereby improving advice quality.
Proposition 31: Advisor barrier comparison (psychological barriers, functional barriers) affects employee advice motivation comparison. When employees perceive barriers, they devalue GenAI advice quality, thereby reducing their own advice quality.
3.5.2 The Mediating Role of Advice Motivation Comparison
Social comparison theory (Festinger, 1954) states that when employees perceive capability gaps with GenAI, they may generate improvement motivation to close gaps and improve work performance (Wesche et al., 2022). Improvement motivation helps employees provide higher-quality advice, increasing leader advice adoption likelihood (Cao et al., 2025). Therefore, in the comparison process between employees and GenAI, improvement motivation typically promotes advice quality improvement; high-quality advice subsequently enhances leader trust, ultimately increasing leader advice adoption probability (Rizzo et al., 2024).
Conversely, low employee motivation undermines advice quality and, in turn, leader feedback. According to social exchange theory (Blau, 1964), leader feedback on advice typically depends on employee performance; if employees fail to improve advice quality because of low motivation, leaders are more likely to reject their advice (Cao et al., 2025). Social comparison theory indicates that when employees perceive large barriers, psychological pressure may increase, reducing advice quality and ultimately leading to leader rejection (Wood, 1996). Research has shown that when employees have low acceptance of GenAI advice, they experience negative emotions that affect advice quality (Wiesche et al., 2024). Additionally, excessively high GenAI evaluation standards may weaken employee confidence, inhibit advice behavior, and even damage the organizational innovation climate (SimanTov-Nachlieli, 2025). Under such multiple pressures, employee advice quality easily declines, triggering negative leader feedback. We therefore propose:
Proposition 32: Advice motivation comparison positively mediates the relationship between advisor barrier comparison and leader advice adoption. Barriers trigger positive advice motivation in employees, improving advice quality and promoting leader advice adoption.
Proposition 33: Advice motivation comparison negatively mediates the relationship between advisor barrier comparison and leader advice rejection. Barriers trigger negative advice motivation in employees, reducing advice quality and increasing leader advice rejection.
3.5.3 The Moderating Role of Advice Infighting
Advice infighting refers to opinion disagreements and conflicts among team members over advice content, which may stimulate employees' competitive awareness in team work and in turn affect their advice motivation and behavior (Bucher et al., 2024). Its core characteristics relate to task conflict, team conflict, and employee voice dissent (Kim & Cho, 2024). Research shows that when team members disagree over advice content, the resulting conflict may affect employees' willingness to advise, their advice quality, and ultimately leader feedback (Erkutlu & Chafra, 2015).
Moderate advice infighting can be viewed as task conflict—rational debate around advice content (Jehn, 1995). Moderate task conflict helps stimulate members' cognitive diversity and competitive awareness, thereby improving advice quality and team performance (De Dreu & Weingart, 2003). In this process, to stand out in advice competition, employees may pay more attention to data support and content innovation, thereby optimizing their advice strategies (Ng & Feldman, 2012; Popelnukha et al., 2022).
However, if infighting exceeds rational bounds, it may evolve into relationship conflict, damaging team cooperation (Wibberley & Saundry, 2016). At this point, employees may view competition as a threat, generating anxiety and distrust and reducing their willingness to advise (Hyman, 2018). Additionally, high-level advice conflict may weaken employees' emotional investment, inclining them toward silence or reduced constructive advice (Mowbray et al., 2022). Especially when internal team trust is low, members may adopt defensive strategies such as avoiding advice competition, reducing information sharing, or even refusing collaboration (Tangirala & Ramanujam, 2008).
Therefore, advice infighting can be viewed as a dynamic team advice conflict phenomenon with potentially facilitative or inhibitive effects. Moderate advice competition can stimulate employees to improve advice quality and enhance organizational performance; excessive competition may destroy team trust, inhibit advice behavior, and affect innovation capability (Kim & Cho, 2024). We therefore propose:
Proposition 34: Advice infighting can enhance the effect of barrier comparison, stimulating positive advice motivation effects and improving advice quality.
Proposition 35: Advice infighting can weaken or reverse the effect of barrier comparison, triggering negative advice motivation effects and reducing advice quality.
3.5.4 Intervention Strategies
Intervention strategies (such as training, incentives, and support) can effectively enhance employee motivation and self-efficacy, thereby improving work performance (Dai et al., 2024). Research shows that employees receiving external support and resources maintain higher work enthusiasm and demonstrate stronger adaptability when facing challenges (Newby et al., 2021). Effective intervention strategies improve employee advice quality through dual pathways: on one hand, skill training and psychological support enhance employees' ability and confidence to cope with barriers, thereby improving self-efficacy and advice quality (van den Heuvel et al., 2015). On the other hand, intervention strategies help employees adapt to technological change and leverage their advantages, significantly increasing high-quality advice behavior especially in complex technological environments (Na-Nan & Sanamthong, 2020; Kim & Cho, 2024).
However, if intervention strategies mismatch employee actual needs, they may not enhance confidence but instead increase frustration and helplessness (Dai et al., 2024). When employees perceive large gaps with GenAI technology and organizational training cannot effectively compensate, their advice motivation significantly decreases, even generating negative emotions. According to social comparison theory (Festinger, 1954), ineffective interventions reduce self-efficacy and inhibit advice behavior (Xavier & Korunka, 2025). For example, training content disconnected from actual needs, lack of long-term tracking and feedback mechanisms, or inappropriate incentive systems may make employees feel their efforts are unrecognized, reducing work motivation (Wang & Chuang, 2024) and decreasing advice willingness (Kim et al., 2020).
Therefore, the role of intervention strategies in employee advice depends on implementation effectiveness. Effective training and incentive measures can enhance employee self-efficacy, improve work motivation, and increase advice quality (Schemmer et al., 2023). Ineffective interventions may backfire, inhibiting advice behavior (Newby et al., 2021). Managers must provide personalized support based on employee actual conditions to ensure intervention strategies play positive roles (van den Heuvel et al., 2015). We therefore propose:
Proposition 36: Effective intervention strategies enhance positive incentive effects, thereby strengthening the impact of employee advice quality on leader advice adoption.
Proposition 37: Ineffective intervention strategies weaken positive incentive effects or even generate negative impacts, thereby strengthening the effect of devaluing GenAI advice quality on leader advice rejection.
4. Theoretical Construction
Based on social comparison theory's three aspects—social dynamic comparison, performance-reward comparison, and agency capability comparison (Matthews & Kelemen, 2025)—this paper systematically reveals the differential perception mechanism of leader adoption of employee versus GenAI advice in organizational management contexts, constructing a multi-level theoretical framework. It aims to explore leaders' cognitive reactions, affective responses, behavioral reactions, and dynamic influence mechanisms in employee-GenAI advice adoption, providing scientific foundations for organizational decision optimization, human-AI collaborative effectiveness improvement, and AI governance. The theoretical contributions mainly manifest in four aspects:
First, this paper expands social comparison theory's application boundaries and dimensional system, building a multi-dimensional human-AI comparison framework and achieving theoretical breakthrough from human-to-human to human-AI comparison. Traditional social comparison theory focuses primarily on human-to-human comparison, but GenAI's technological breakthroughs present new contexts with non-human comparison objects. This paper extends social comparison theory to human-AI comparison scenarios, integrating a comparison framework of social dynamic comparison, performance-reward comparison, and agency capability comparison, clarifying this framework's key role in leader reactions to GenAI advice. By introducing comparisons across advice source, advice characteristics, advice content, advice quality, advice adoption risk, and advice adoption barriers, this paper not only enriches social comparison dimensions but also reveals unique cognitive and affective mechanisms in human-AI comparison, providing new theoretical pathways for social comparison theory's evolution in the AI era.
Second, this paper constructs an individual-to-team cross-level integrated model of leader human-AI advice response. The research extends from individual-level advice quality perception, affective response, and behavioral adoption to team-level responsibility attribution, risk perception, and intervention mechanisms, forming a complete theoretical chain from micro-cognition to macro-decision-making. By introducing team-level mediators and moderators such as responsibility attribution and advice infighting, this paper reveals the internal mechanisms through which team advice risk affects leader decision-making, filling gaps in existing research on team-level human-AI collaborative advice adoption mechanisms and enriching social comparison theory at the team level.
Third, this paper proposes and validates an intervention strategy framework for the double-edged sword effect of GenAI advice adoption. It systematically identifies that while GenAI advice improves decision efficiency and objectivity, it may also trigger negative effects like identity threat, responsibility ambiguity, and trust crisis. By introducing advisor barrier comparison and intervention strategies, this paper not only reveals how barriers affect advice motivation and quality but also constructs an intervention path model, helping organizations identify intervention levels for human-AI advice adoption and providing path references for future research on human-AI team advice adoption intervention strategies.
Fourth, this paper promotes interdisciplinary integration among management, psychology, and AI research. It integrates social comparison theory, attribution theory, advice adoption theory, and AI trust research to construct a multi-theoretical cross-framework. By combining GenAI technical attributes like explainability, availability, and accuracy with psychological and social variables like social comparison orientation, identity threat, and responsibility attribution, this paper not only theoretically expands the depth of human-AI interaction research but also methodologically provides an operational variable system and model paradigm for subsequent interdisciplinary research.
References
段锦云, 魏秋江. (2012). 建议效能感结构及其在员工建议行为发生中的作用. 心理学报,44(7), 972−985.
韩翼, 肖素芳. (2020). 领导为什么拒谏: 基于动机社会视角的阐释. 外国经济与管理, 42(08), 68−80.
韩翼, 董越, 胡筱菲, 谢怡敏. (2017). 员工进谏策略及其有效性研究. 管理学报, 14(12), 1777−1785.
李思贤, 陈佳昕, 宋艾珈, 王梦琳, 段锦云. (2022). 人们对人工智能建议接受度的影响因素. 心理技术与应用, 10(4), 202−214.
刘伟, 谭文辉. (2025). 人机环境系统融合智能: 超越人类智能的可能性. 清华大学出版社.
裴巧玲, 吴秋洁. (2023). 员工建言与管理者纳言: 一个有调节的中介模型. 现代营销, (12), 119−121.
魏昕, 张志学. (2014). 上级何时采纳促进性或抑制性进言? 上级地位和下属专业度的影响. 管理世界, (1), 132−143.
许丽颖, 赵一骏, 喻丰. (2025). 人工智能主管提出的道德行为建议更少被遵从 (人工智能心理与治理专刊). 心理学报, 57(1), 1−23.
章凯, 时金京, 罗文豪. (2020). 建言采纳如何促进员工建言: 基于目标自组织视角的整合机制. 心理学报, 52(02), 229−239.
宗树伟, 杨付, 龙立荣, 韩翼. (2025). 促进还是抑制? 生成式人工智能建议采纳对创造力的双刃剑效应. 心理科学进展, 33(6), 905−915.
张若勇, 闫石, 邵琪. (2019). 同事建议对员工任务绩效影响机制的研究. 兰州大学学报(社会科学版), 47(04), 73−82.
Aghion, P., Jones, B. F., & Jones, C. I. (2017). Artificial intelligence and economic growth. In Agrawal, A., Gans, J., & Goldfarb, A. (Eds.), National Bureau of Economic Research (pp. 237−290). University of Chicago Press.
Agrawal, A., Gans, J., & Goldfarb, A. (2022). Power and prediction: the disruptive economics of artificial intelligence. Harvard Business Review Press.
Agrawal, A., Gans, J. S., & Goldfarb, A. (2024). Artificial intelligence adoption and system-wide change. Journal of Economics & Management Strategy, 33(2), 327–337.
Ahn, P., Swol, L., Kim, S. J., & Park, H. (2021). Enhanced motivation and decision making from going hybrid. Small Group Research, 53, 104649642110435.
Aschauer, F., Sohn, M., & Hirsch, B. (2024). Managerial advice‐taking—Sharing responsibility with (non) human advisors trumps decision accuracy. European Management Review, 21(1), 186−203.
Bailey, P. E., Leon, T., Ebner, N. C., Moustafa, A. A., & Weidemann, G. (2023). A meta-analysis of the weight of advice in decision-making. Current Psychology, 42(28), 24516−24541.
Baines, J. I., Dalal, R. S., Ponce, L. P., & Tsai, H. C. (2024). Advice from artificial intelligence: a review and practical implications. Frontiers in Psychology, 15, 1390182.
Blau, P. M. (1964). Exchange and power in social life. New York, NY: Wiley.
Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101(2), 127−151.
Booyse, D., & Scheepers, C. B. (2024). Barriers to adopting automated organizational decision-making through the use of artificial intelligence. Management Research Review, 47(1), 64−85.
Böhm, R., Jörling, M., Reiter, L., & Fuchs, C. (2023). People devalue generative AI's competence but not its advice in addressing societal and personal challenges. Communications Psychology, 1(1), 1−10.
Brykman, K. M., & Raver, J. L. (2021). To speak up effectively or often? The effects of voice quality and voice frequency on peers' and managers' evaluations. Journal of Organizational Behavior, 42(4), 504−526.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. W.W. Norton & Company.
Bucher, E., Schou, P. K., & Waldkirch, M. (2024). Just another voice in the crowd? Investigating digital voice formation in the gig economy. Academy of Management Discoveries, 10(3), 488−511.
Burton, J. W., Stein, M.-K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239.
Cao, X., De Zwaan, L., & Wong, V. (2025). Building trust in robo-advisory: technology, firm-specific and system trust. Qualitative Research in Financial Markets.
Cecil, J., Lermer, E., Hudecek, M. F., Sauer, J., & Gaube, S. (2024). Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Scientific reports, 14(1), 9736.
Chakraborty, D., Kar, A. K., Patre, S., & Gupta, S. (2024). Enhancing trust in online grocery shopping through generative AI chatbots. Journal of Business Research, 180, 114737.
Chen, Q., Yin, C., & Gong, Y. (2025). Would an AI chatbot persuade you: an empirical answer from the elaboration likelihood model. Information Technology & People, 38(2), 937−962.
Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137−1181.
Cheng, B., Lin, H., & Kong, Y. (2023). Challenge or hindrance? How and when organizational artificial intelligence adoption influences employee job crafting. Journal of Business Research, 164, 113987.
Chua, A. Y., Pal, A., & Banerjee, S. (2023). AI-enabled investment advice: Will users buy it?. Computers in Human Behavior, 138, 107481.
Chui, M., Manyika, J., & Miremadi, M. (2016). Where machines could replace humans-and where they can't (yet). The McKinsey Quarterly, 1−12.
Choudhury, V., & Karahanna, E. (2008). The relative advantage of electronic channels: A multidimensional view. MIS Quarterly, 32(1), 179–200.
Choudhary, V., Marchetti, A., Shrestha, Y. R., & Puranam, P. (2025). Human-AI ensembles: When can they work?. Journal of Management, 51(2), 536−569.
Civit, M., Civit-Masot, J., Cuadrado, F., & Escalona, M. J. (2022). A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends. Expert Systems with Applications, 209, 118190.
Corcoran, K., Crusius, J., & Mussweiler, T. (2011). Social comparison: Motives, standards, and mechanisms. Theories in Social Psychology, 119–139.
Dai, Y., Lee, J., & Kim, J. W. (2024). AI vs. human voices: How delivery source and narrative format influence the effectiveness of persuasion messages. International Journal of Human–Computer Interaction, 40(24), 8735−8749.
Dalal, R. S., & Bonaccio, S. (2010). What types of advice do decision-makers prefer? Organizational Behavior and Human Decision Processes, 112(1), 11–23.
Damen, F., Van Knippenberg, B., & Van Knippenberg, D. (2008). Affective match in leadership: Leader emotional displays, follower positive affect, and follower performance. Journal of Applied Social Psychology, 38(4), 868–902.
Dang, J., & Liu, L. (2022). Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior, 134, 107300.
Dang, J., & Liu, L. (2024). Extended artificial intelligence aversion: People deny humanness to artificial intelligence users. Journal of Personality and Social Psychology.
Daschner, S., & Obermaier, R. (2022). Algorithm aversion? On the influence of advice accuracy on trust in algorithmic advice. Journal of Decision Systems, 31(sup1), 77−97.
Davenport, T. H., & Kirby, J. (2016). Only humans need apply: winners and losers in the age of smart machines. Harper Business.
De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. Journal of Applied Psychology, 88(4), 741–749.
Detert, J. R., & Burris, E. R. (2007). Leadership behavior and employee voice: Is the door really open? Academy of Management Journal, 50(4), 869−884.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114−126.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155−1170.
Dillard, J. P., Wilson, S. R., Tusing, K. J., & Kinney, T. A. (1997). Politeness judgments in personal relationships. Journal of Language and Social Psychology, 16(3), 297-325.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv: Machine Learning. arXiv preprint arXiv:1702.08608.
Duong, D., & Solomon, B. D. (2023). Analysis of large-language model versus human performance for genetics questions. medRxiv, 2023.01.27.23285115.
Edwards, M. R., Zubielevitch, E., Okimoto, T., Parker, S., & Anseel, F. (2024). Managerial control or feedback provision: How perceptions of algorithmic HR systems shape employee motivation, behavior, and well-being. Human Resource Management, 63(4), 691–710.
Endsley, M. R. (2023). Supporting human-ai teams: transparency, explainability, and situation awareness. Computers in Human Behavior, 140, 107574.
Erkutlu, H., & Chafra, J. (2015). The mediating roles of psychological safety and employee voice on the relationship between conflict management styles and organizational identification. American journal of business, 30(1), 72−91.
Exline, J. J., & Lobel, M. (1999). The perils of outperformance: Sensitivity about being the target of a threatening upward comparison. Psychological Bulletin, 125(3), 307−337.
Farh, C. I., Li, J., & Lee, T. W. (2024). Toward a contextualized view of voice quality, its dimensions, and its dynamics across newcomer socialization. Academy of Management Review, 49(2), 399-428.
Feng, B., & MacGeorge, E. L. (2010). The influences of message and source factors on advice outcomes. Communication Research, 37(4), 553−575.
Festinger, L. (1954). A theory of social comparison processes. Human relations, 7(2), 117−140.
Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111−126.
Gerber, J. P., Wheeler, L., & Suls, J. (2018). A social comparison theory meta-analysis 60+ years on. Psychological Bulletin, 144(2), 177−197.
Gibbons, F. X., & Buunk, B. P. (1999). Individual differences in social comparison: Development of a scale of social comparison orientation. Journal of Personality and Social Psychology, 76(1), 129−142.
Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022). Artificial intelligence and declined guilt: Retailing morality comparison between human and AI. Journal of Business Ethics, 178(4), 1027–1041.
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627−660.
Goldsmith, D. J., & MacGeorge, E. L. (2000). The impact of politeness and relationship on perceived quality of advice about a problem. Human Communication Research, 26, 234-263.
Graf, B., & Antoni, C. H. (2023). Drowning in the flood of information: a meta-analysis on the relation between information overload, behaviour, experience, and health and moderating factors. European Journal of Work and Organizational Psychology, 32(2), 173–198.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99−120.
Hogg, M. A., Terry, D. J., & White, K. M. (1995). A tale of two theories: A critical comparison of identity theory with social identity theory. Social Psychology Quarterly, 58(4), 255–269.
Huang, Z., Che, C., Zheng, H., & Li, C. (2024). Research on generative artificial intelligence for virtual financial robo-advisor. Academic Journal of Science and Technology, 10(1), 74−80.
Hyman, J. (2018). Employee voice and participation: Contested past, troubled present, uncertain future. Routledge.
Ikeda, S. (2024). Inconsistent advice by ChatGPT influences decision making in various areas. Scientific Reports, 14(1), 15876.
Jakubik, J., Schöffer, J., Hoge, V., Vössing, M., & Kühl, N. (2022, September). An empirical evaluation of predicted outcomes as explanations in human-AI decision-making. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 353-368). Cham: Springer Nature Switzerland.
Jeblick, K., Schachtner, B., Dexl, J., Mittermeier, A., Stüber, A. T., Topalis, J.,… Ingrisch, M. (2024). ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports. European Radiology, 34(5), 2817–2825.
Jehn, K. A. (1995). A multimethod examination of the benefits and detriments of intragroup conflict. Administrative Science Quarterly, 40(2), 256−282.
Jiang, J., Gong, Y., Dong, Y., Han, Y., & Qin, Y. (2025). Unpacking leader critical thinking in employee voice quality and silence frequency. Journal of Occupational and Organizational Psychology, 98(1), e12554.
Jiang, H. H., Brown, L., Cheng, J., Khan, M., Gupta, A., Workman, D., …Gebru, T. (2023). AI art and its impact on artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 363–374). Association for Computing Machinery.
Jin, F., & Zhang, X. (2025). Artificial intelligence or human: when and why consumers prefer AI recommendations. Information Technology & People, 38(1), 279−303.
Jones, G. R., & George, J. M. (1998). The experience and evolution of trust: Implications for cooperation and teamwork. Academy of Management Review, 23(3), 531-546.
Jones, S. M., & Burleson, B. R. (1997). The impact of situational variables on helpers' perceptions of comforting messages: An attributional analysis. Communication Research, 24(5), 530−555.
Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make that big decision. Harvard Business Review, 89(6), 50−60.
Kahr, P. K., Rooks, G., Snijders, C., & Willemsen, M. C. (2024). The trust recovery journey. The effect of timing of errors on the willingness to follow AI advice. In Proceedings of the 29th International Conference on Intelligent User Interfaces (pp. 609–622). Association for Computing Machinery.
Kämmer, J. E., Choshen-Hillel, S., MüllerTrede, J., Black, S. L., & Weibler, J. (2023). A systematic review of empirical studies on advice-based decisions in behavioral and organizational research. Decision, 10(2), 107−137.
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
Kaufmann, E. (2021). Algorithm appreciation or aversion? Comparing in-service and pre-service teachers' acceptance of computerized expert models. Computers and Education: Artificial Intelligence, 2, 100028.
Kim, H. Y., Lee, Y. S., & Jun, D. B. (2020). Individual and group advice taking in judgmental forecasting: Is group forecasting superior to individual forecasting?. Journal of Behavioral Decision Making, 33(3), 287−303.
Kim, T., & Cho, W. (2024). Employee voice opportunities enhance organizational performance when faced with competing demands. Review of Public Personnel Administration, 44(4), 713−739.
Kuosmanen, O. J. (2024). Advice from humans and artificial intelligence: Can we distinguish them, and is one better than the other? (Unpublished Master's thesis). UiT Norges arktiske universitet.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90−103.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Longoni, C., & Cian, L. (2020). Artificial intelligence in utilitarian vs. hedonic contexts: The "word-of-machine" effect. Journal of Marketing, 86(1), 96−118.
Lord, R. G., & Smith, J. E. (1983). Theoretical, information processing, and situational factors affecting attribution theory models of organizational behavior. Academy of Management Review, 8(1), 50–60.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems, 4768–4777.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: machines vs. Humans: the impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947.
MacGeorge, E. L., Feng, B., Butler, G. L., & Budarz, S. K. (2004). Understanding advice in supportive interactions: Beyond the facework and message evaluation paradigm. Human Communication Research, 30, 42−70.
MacGeorge, E. L., Lichtman, R. M., & Pressey, L. C. (2002). The evaluation of advice in supportive interactions: Facework and contextual factors. Human Communication Research, 28(3), 451−463.
Mahmud, H., Islam, A. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175,
Mahmud, H., Islam, A. N., & Mitra, R. K. (2023). What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness. Technological Forecasting and Social Change, 193, 122641.
Marocco, S., Talamo, A., & Quintiliani, F. (2024). From service design thinking to the third generation of activity theory: A new model for designing AI-based decision-support systems. Frontiers in Artificial Intelligence, 7,
Matthews, M. J., & Kelemen, T. K. (2025). To compare is human: A review of social comparison theory in organizational settings. Journal of Management, 51(1), 212−248.
Mesbah, N., Tauchert, C., & Buxmann, P. (2021). Whose advice counts more–man or machine? an experimental investigation of AI-based advice utilization. Proceedings of the 54th Hawaii International Conference on System Sciences, 4083−4092.
Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team performance: A meta-analysis. Journal of Applied Psychology, 94(2), 535−546.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267, 1-38.
Milyavsky, M., & Gvili, Y. (2024). Advice taking vs. combining opinions: Framing social information as advice increases source's perceived helping intentions, trust, and influence. Organizational Behavior and Human Decision Processes, 183, 104328.
Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable artificial intelligence: a comprehensive review. Artificial Intelligence Review, 55(5), 3503−3568.
Morrison, E. W. (2011). Employee voice behavior: Integration and directions for future research. Academy of Management Annals, 5(1), 373−412.
Mowbray, P. K., Wilkinson, A., & Tse, H. H. (2022). Strategic or silencing? Line managers' repurposing of employee voice mechanisms for high performance. British Journal of Management, 33(2), 1054−1070.
Mussweiler, T. (2003). Comparison processes in social judgment: Mechanisms and consequences. Psychological Review, 110(3), 472−489.
Na-Nan, K., & Sanamthong, E. (2020). Self-efficacy and employee job performance: Mediating effects of perceived workplace support, motivation to transfer and transfer of training. International Journal of Quality & Reliability Management, 37(1), 1−17.
Nelson, G. S. (2019). Bias in artificial intelligence. North Carolina medical journal, 80(4), 220−222.
Newby, J., Mason, E., Kladnistki, N., Murphy, M., Millard, M., Haskelberg, H., ... & Mahoney, A. (2021). Integrating internet CBT into clinical practice: a practical guide for clinicians. Clinical Psychologist, 25(2),
Ng, T. W., & Feldman, D. C. (2012). Employee voice behavior: A meta-analytic test of the conservation of resources framework. Journal of Organizational Behavior, 33(2), 216−234.
Ng, T. W., Wang, M., Hsu, D. Y., & Su, C. (2022). Voice quality and ostracism. Journal of Management, 48(2), 281−318.
Nishant, R., Schneckenberg, D., & Ravishankar, M. N. (2024). The formal rationality of artificial intelligence-based algorithms and the problem of bias. Journal of Information Technology, 39(1), 19−40.
Oksanen, A., Cvetkovic, A., Akin, N., Latikka, R., Bergdahl, J., Chen, Y., & Savela, N. (2023). Artificial intelligence in fine arts: A systematic review of empirical research. Computers in Human Behavior: Artificial Humans, 1(2), 100004.
Parke, M. R., Tangirala, S., Sanaria, A., & Ekkirala, S. (2022). How strategic silence enables employee voice to be valued and rewarded. Organizational Behavior and Human Decision Processes, 173, 104187.
Perry-Smith, J. E., & Mannucci, P. V. (2017). From creativity to innovation: The social network drivers of the four phases of the idea journey. Academy of Management Review, 42(1), 53−79.
Pescetelli, N., & Yeung, N. (2021). The role of decision confidence in advice-taking and trust formation. Journal of Experimental Psychology: General, 150(3), 507−520.
Pfrombeck, J., Levin, C., Rucker, D. D., & Galinsky, A. D. (2023). The hierarchy of voice framework: the dynamic relationship between employee voice and social hierarchy. Research in Organizational Behavior,
Popelnukha, A., Almeida, S., Obaid, A., Sarwar, N., Atamba, C., Tariq, H., & Weng, Q. (2022). Keep your mouth shut until I feel good: testing the moderated mediation model of leader's threat to competence, self-defense tactics, and voice rejection. Personnel Review, 51(1), 394-431.
Qin, X., Zhou, X., Chen, C., Wu, D., Zhou, H., Dong, X., ... & Lu, J. G. (2025). AI aversion or appreciation? A capability–personalization framework and a meta-analytic review. Psychological Bulletin, 151(5), 580−599.
Ramaul, L., Ritala, P., & Ruokonen, M. (2024). Creational and conversational AI affordances: How the new breed of chatbots is revolutionizing knowledge industries. Business Horizons, 67(5), 615–627.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.
Rizzo, C., Bagna, G., & Tuček, D. (2024). Do managers trust AI? An exploratory research based on social comparison theory. Management Decision.
Ruan, Y., Le, J. D. V., & Reis, H. T. (2024). How can I help?: Specific strategies used in interpersonal emotion regulation in a relationship context. Emotion (Washington, D.C.), 24(2), 329–344.
Sachin, P. K., & Schecter, A. (2024). Advice utilization in combined human-algorithm decision-making: An analysis of preferences and behaviors. Journal of the Association for Information Systems, 25(6), 1439−1465.
Schemmer, M., Bartos, A., Spitzer, P., Hemmer, P., Kühl, N., Liebschner, J., & Satzger, G. (2023). Towards effective human-AI decision-making: The role of human learning in appropriate reliance on AI advice. arXiv preprint arXiv:2310.02108.
Schmitt, A., Zierau, N., Janson, A., & Leimeister, J. M. (2021). Voice as a contemporary frontier of interaction design. ECIS 2021 Research Papers, 143.
Schoenherr, J. R., & Thomson, R. (2024). When AI fails, who do we blame? Attributing responsibility in human–AI interactions. IEEE Transactions on Technology and Society, 5(1), 61−70.
Shamim, S., Zeng, J., Khan, Z., & Zia, N. U. (2020). Big data analytics capability and decision making performance in emerging market firms: The role of contractual and relational governance mechanisms. Technological Forecasting and Social Change, 161, 120315.
Shaver, K. G. (2016). An Introduction to Attribution Processes. Routledge.
Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66−83.
Siau, K., & Wang, W. (2020). Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management, 31(2), 74−87.
SimanTov-Nachlieli, I. (2025). More to lose: The adverse effect of high performance ranking on employees' preimplementation attitudes toward the integration of powerful AI aids. Organization Science, 36(1), 1−20.
Sturm, T., Pumplun, L., Gerlach, J. P., Kowalczyk, M., & Buxmann, P. (2023). Machine learning advice in managerial decision-making: The overlooked role of decision makers' advice utilization. The Journal of Strategic Information Systems, 32(4), 101790.
Tai, K., Keem, S., Lee, K. Y., & Kim, E. (2024). Envy influences interpersonal dynamics and team performance: Roles of gender congruence and collective team identification. Journal of Management, 50(2), 556−587.
Tangirala, S., & Ramanujam, R. (2008). Exploring nonlinearity in employee voice: The effects of personal control and organizational identification. Academy of Management Journal, 51(6), 1189−1203.
Tilton, Z., LaVelle, J. M., Ford, T., & Montenegro, M. (2023). Artificial intelligence and the future of evaluation education: Possibilities and prototypes. New Directions for Evaluation, 2023(178–179), 97–109.
Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600–1631.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297–323.
Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567.
Van den Heuvel, M., Demerouti, E., & Peeters, M. C. (2015). The job crafting intervention: Effects on job resources, self‐efficacy, and affective well‐being. Journal of Occupational and Organizational Psychology, 88(3), 511−532.
Van, F., & Jenna, A. (2021). The effect of a partner's work success on emotions and motivation: A social comparison process.
Van Kleef, G. A. (2009). How emotions regulate social life: The emotions as social information (EASI) model. Current Directions in Psychological Science, 18(3), 184–188.
Wang, Y. Y., & Chuang, Y. W. (2024). Artificial intelligence self-efficacy: Scale development and validation. Education and Information Technologies, 29(4), 4785−4808.
Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548−573.
Wesche, J. S., Hennig, F., Kollhed, C. S., Quade, J., Kluge, S., & Sonderegger, A. (2022). People's reactions to decisions by human vs. algorithmic decision-makers: The role of explanations and type of selection tests. European Journal of Work and Organizational Psychology, 33(2), 146−157.
Wibberley, G., & Saundry, R. (2016). From representation gap to resolution gap: Exploring the role of employee voice in conflict management. In Reframing Resolution: Innovation and Change in the Management of Workplace Conflict (pp. 127−148). London: Palgrave Macmillan UK.
Wiesche, M., Pflügler, C., & Thatcher, J. B. (2024). The impact of social comparison on turnover among information technology professionals. Journal of Management Information Systems, 41(1), 297−324.
Williams, S. H. (2020). AI advice: the irony of big data disclosures and the new advice paradigm. SSRN Electronic Journal.
Wolsink, I., Den Hartog, D. N., Belschak, F. D., & Sligte, I. G. (2019). Dual cognitive pathways to voice quality: Frequent voicers improvise, infrequent voicers elaborate. PLoS One, 14(2), e0212608.
Wood, J. V. (1996). What is social comparison and how should we study it? Personality and Social Psychology Bulletin, 22(5), 520−537.
Xavier, D. F., & Korunka, C. (2025). Integrating artificial intelligence across cultural orientations: A longitudinal examination of creative self-efficacy and employee autonomy. Computers in Human Behavior Reports, 18.
Yang, C., Bauer, K., Li, X., & Hinz, O. (2025). My advisor, her AI, and me: Evidence from a field experiment on human–AI collaboration and investment decisions. Management Science. Advance online publication. https://doi.org/10.1287/mnsc.2022.03918
Yaniv, I. (2004). Receiving other people's advice: Influence and benefit. Organizational Behavior and Human Decision Processes, 93(1), 1−13.
You, S., Yang, C. L., & Li, X. (2022). Algorithmic versus human advice: Does presenting prediction performance matter for algorithm appreciation? Journal of Management Information Systems, 39(2), 336−365.
Yue, B., & Li, H. (2023). The impact of human-AI collaboration types on consumer evaluation and usage intention: A perspective of responsibility attribution. Frontiers in Psychology, 14, 1277861.