Many people want AI-generated content on social media to be clearly labeled, and the demand deserves to be taken seriously.
Just a few minutes of browsing social media is enough to see that AI is everywhere. Yet we don't always realize when content was created by AI, and that gap is becoming a major problem as frustration with 'junk AI' grows.
From lifeless photos and bizarre videos to seemingly fluent but shallow text, AI-generated content has become ubiquitous across most platforms. According to an exclusive CNET survey, 94% of American adults who use social media believe they have encountered AI-generated or AI-edited content. However, only 44% said they were confident in distinguishing real photos and videos from AI-generated ones.
This highlights a significant gap between perceiving AI's presence and actually being able to identify it.
Most people lack confidence in identifying AI content.
In the age of AI, seeing is no longer believing. Tools like OpenAI's Sora or Google's new image generation models can create incredibly realistic images and videos, and chatbots can write text so smoothly that it's hard to tell it's machine-generated.
Therefore, it's not surprising that 25% of American adults admit they lack confidence in distinguishing between real and AI-generated content. This lack of confidence is even more pronounced among older generations, with 40% of Baby Boomers and 28% of Gen X reporting feeling uncertain. Those with less exposure to or understanding of AI tend to have more difficulty accurately identifying it.
Approximately 72% of those surveyed said they take action to verify whether the content they see is AI-generated, with Gen Z having the highest verification rate (84%).
The most common method is to carefully observe images and videos for anomalies. Approximately 60% of users follow this approach. Previously, details like extra fingers or recurring errors in videos were clear signs of AI content. However, newer models have significantly improved and rarely make those 'silly' mistakes anymore, making identification more difficult.
Beyond visual observation, 30% of users search for labels or notifications revealing AI-generated content, while 25% search for that content on other sources such as newspapers or using reverse image search tools. Only 5% said they use dedicated deepfake detection tools.
Notably, 25% of American adults do nothing to verify the authenticity of content. This rate is highest among Baby Boomers (36%) and Gen X (29%). This is concerning given the increasing use of AI for scams and fraud.
51% want clearer labeling for AI.
Among the 2,443 social media users surveyed, 51% believe there is a need for better labeling of AI-generated or AI-edited content, including deepfakes. The highest support came from Millennials (56%) and Gen Z (55%).
In addition, 21% believe that AI content should be completely banned on social media, without exception. This percentage is highest among Gen Z (25%). When asked whether AI content should be allowed to exist but subject to strict regulations, 36% agreed.
One reason for this may be that only 11% of those surveyed believe AI content provides significant value, whether as entertainment, information, or practical utility, while 28% believe it offers little to no value.
How can we limit AI content and detect deepfakes?
The best defense against AI is still clear thinking. If content looks too perfect, too strange, or simply 'too good to be true,' there's a good chance it isn't real.
Users can utilize deepfake detection tools, starting with those from the Content Authenticity Initiative – which support various file formats. Additionally, verifying the uploading account is crucial. Accounts that frequently publish AI-generated content often have chaotic feeds, lack consistency, have little genuine interaction, or post numerous suspicious links.
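The Content Authenticity Initiative's tools work by reading Content Credentials, signed metadata manifests defined by the C2PA standard and embedded in the media file itself. As a very rough illustration (not a substitute for the official verification tools), a script can check whether a file's bytes even contain the `c2pa` manifest label; the function name here is our own, and a positive match only hints that a manifest may be present:

```python
import pathlib

def has_c2pa_marker(path: str) -> bool:
    """Crude heuristic: scan a file's raw bytes for the 'c2pa' label
    that C2PA manifest stores carry. This does NOT validate the
    manifest or its signature; it only suggests one may be embedded."""
    data = pathlib.Path(path).read_bytes()
    return b"c2pa" in data

# Usage (hypothetical filenames):
# has_c2pa_marker("photo_with_credentials.jpg")  -> likely True
# has_c2pa_marker("plain_screenshot.png")        -> likely False
```

Real verification checks the cryptographic signature chain, which is exactly what the CAI's open-source tools do; a byte scan like this can only tell you it's worth running them.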
To limit AI content in their feeds, users can utilize filters available on platforms like Pinterest, or adjust AI settings on Instagram, Facebook, and other services. When encountering unwanted content, marking it as 'not interested' also helps the algorithm understand and reduce the display of similar content.
However, even with these measures in place, being fooled by AI is sometimes unavoidable. Until a universal and effective AI detection system exists, users must rely on the available tools, keep educating themselves, and stay vigilant.