Study: Using Generative AI at Work Makes You Question Your Competence

A recently published study from Duke University has uncovered a paradox: although generative AI can significantly boost productivity, using tools like ChatGPT, Claude, or Gemini may lead colleagues and managers to judge you as incompetent or even lazy. The study, comprising four experiments with more than 4,400 participants, was conducted by researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of the Fuqua School of Business at Duke University.

 


Experiment 1: AI users expect to be judged as lazy and less competent

In the first experiment, participants were asked to imagine using either an AI tool or a standard dashboard-creation tool to complete a work task. Those in the AI group predicted they would be viewed as:

  1. Lazier
  2. Less capable
  3. Less hardworking
  4. More replaceable

compared to the group using traditional work tools. In addition, 52% of the AI group said they were unwilling to disclose their AI use to managers or colleagues. A follow-up study confirmed these expectations: employees who used AI received significantly lower performance ratings from colleagues than those who used conventional tools.

 

Average ratings on a 7-point scale (higher scores mean the trait was perceived more strongly):

Criteria       AI Group   Non-AI Group
Capable        3.2/7      4.8/7
Hardworking    3.5/7      5.1/7
Lazy           5.1/7      3.3/7
Replaceable    4.9/7      3.0/7

Experiment 2: Bias persists despite AI's growing popularity

The second experiment asked participants to rate descriptions of employees who were assisted by AI, assisted by another human, or worked without assistance. The results were consistent: AI-assisted employees were judged as lazier, and were rated lower on:

  1. Competence
  2. Diligence
  3. Independence
  4. Confidence

A further study examined whether the growing prevalence of AI in the workplace reduces this bias. Whether AI was described as common or rare, the negative evaluations remained unchanged, suggesting the bias is difficult to eradicate.

Experiment 3: Stereotypes influence hiring decisions

In a hiring simulation, managers with little AI experience of their own tended to reject candidates who used AI heavily, while managers with high AI exposure favored those candidates. Consistent with this, perceptions of laziness toward AI-using candidates were strongest among raters with low AI exposure.

Experiment 4: Work context can reduce the bias

The final experiment identified perceived laziness as the main driver of negative evaluations. However, this social penalty shrinks when AI use is clearly useful and relevant to the task at hand. For example:

  1. For manual tasks (e.g., cleaning), AI use directly hurt perceptions of person-job fit, even after accounting for perceived laziness.
  2. For digital tasks (e.g., data analysis), AI use directly improved perceptions of fit, partially offsetting the laziness penalty.

The study concludes that while AI brings productivity benefits, organizations need to address the social bias that comes with it. The authors recommend:

  1. Make AI use transparent: Establish clear standards for how and when employees are allowed to use AI.
  2. Awareness training: Help employees and managers understand that AI is a support tool, not a sign of laziness.
  3. Emphasize relevance: Encourage the use of AI for specific tasks where it can provide clear benefits, thereby reducing bias.

The Duke University research reflects a new challenge of the AI era: balancing technological efficiency with workplace culture so that both organizations and individuals can develop in harmony.
