
A wave of users is abandoning ChatGPT and Gemini over privacy concerns.

Enthusiasm for AI appears to be cooling. More and more people are losing interest in chatbots, with some stopping their use entirely and others limiting the personal information they share with these tools.


According to a new report from Malwarebytes, the initial curiosity about AI is gradually being replaced by a more cautious mindset. Users are not only concerned about privacy but are also beginning to take action to protect their data.

Users are becoming increasingly wary of AI.

A survey by Malwarebytes revealed that 90% of participants are concerned that AI could use their data without consent. Additionally, approximately 88% said they are no longer comfortable sharing personal information with chatbots like ChatGPT or Google Gemini.

Notably, 84% of respondents said they do not provide health information to these tools. This is somewhat surprising, given that many people still habitually upload their medical test results to AI for advice.

Another noteworthy point is the rate at which users are discontinuing chatbot use. Approximately 43% of survey participants said they had stopped using ChatGPT, while the figure for Gemini was 42%, a significant drop.

Nevertheless, not everyone has turned their backs on AI. Many people still use these tools for tasks such as summarizing long documents or visualizing ideas from text. However, the above figures show that OpenAI and Google need to pay particular attention to the growing concerns of users.



Lack of transparency is eroding trust.

Beyond mere concern, many people are beginning to change their behavior to protect their digital footprint.

According to the report, approximately 44% of participants stopped using Instagram, while 37% stopped using Facebook. Although there is no direct evidence, many believe this may be related to concerns about data being used to train AI.

In addition, privacy-protection measures are being more widely adopted: 82% of users decline data collection whenever possible, 71% use ad blockers, and 46% use VPNs.

Furthermore, more and more people are paying attention to platforms' privacy policies. Some even provide fake data when registering for services, or seek out tools to delete their personal information from the internet.

One of the main reasons for this hesitation is that users don't yet fully understand how AI is using their data.

The lack of transparency in how AI systems operate leaves many people feeling both curious and anxious, and that uncertainty undermines their trust.

By Marvin Fry
Updated 23 March 2026