Does ChatGPT Really Make Us 'Lazier' and 'Less Intelligent'?

Since its launch in 2022, ChatGPT and other AI-powered tools have quickly worked their way into work, learning, and personal life, accelerating research, content creation, and more at an unprecedented pace.

It's no surprise that generative AI is being adopted faster than the Internet or the PC ever was. But experts warn: with opportunity comes risk.

One of the leading experts speaking out on the issue is Natasha Govender-Ropert, Head of AI for Financial Crime at Rabobank.

When the brain relies on AI

A recent MIT study has sparked headlines like 'ChatGPT Could Be Making Our Brains Stupid' and 'AI Is Making Humans Lazier and Stupid'. But what is the reality?

Researchers divided 54 Boston-area college students into three groups to write essays:

  1. Group 1 used only their brains (no supporting tools).
  2. Group 2 used Google.
  3. Group 3 used ChatGPT.

Brain-wave measurements showed that the 'brain only' group had the highest level of neural connectivity, while the ChatGPT group had the lowest. AI seems to put writers on 'autopilot', demanding less brain power.

In a fourth round, the groups swapped roles: those who had been writing unaided were given ChatGPT, while the AI group had to write without it. The result? The group that had previously relied on their own brains improved the quality of their writing, while the group accustomed to AI fumbled and forgot what they had written.

After four months of testing, the summary: the 'brain only' group came out ahead on neurological, linguistic, and behavioral measures. The AI group, meanwhile, tended to copy and paste quickly, and teachers flagged their essays as lacking 'original thinking' and 'soul'.

It sounds worrying, but the study isn't really about 'brain rot'; it highlights the habit of taking shortcuts. If AI is overused, people can easily become lazy and gradually more passive. Used consciously and deliberately, though, this risk can be avoided entirely.

The problem lies in critical thinking

Interestingly, the researchers themselves complained about the media coverage. They even set up a FAQ page to emphasize that the results are only suggestive: the study was small-scale, has not been peer-reviewed, and supports no absolute conclusions.

So in the end, what 'kills the brain' may not be ChatGPT, but rather the TL;DR clickbait headlines written to attract views.

Experts at the Vrije Universiteit Amsterdam warn that the biggest risk is not memory loss but a decline in critical thinking. When students and users lean too heavily on the 'authoritative' tone of AI, they dig less deeply, question less, and are more likely to miss the perspectives it obscures.

Here's the core problem: if we accept AI output as truth, we inadvertently absorb the biases and false assumptions baked into the training data.

A Fairer Society Thanks to AI – But Be Smart

According to Natasha Govender-Ropert, there is no fixed definition of 'bias'. "What I consider bias may be completely different from what you do." That is why a clear set of principles is needed for assessing and eliminating bias in data.

Social norms change; the data that trains AI does not. If AI is to serve society fairly and responsibly, we need not only better technology but also the habit of treating all information critically and soberly – whether it comes from the press or from a machine.

Update 25 August 2025