5 ways Generative AI is ruining social networks
Generative AI continues to create a stir, especially as social media platforms scramble to deploy the technology. But the rush comes with downsides. Here are five of the biggest.
1. A wave of AI Slop
If you are a regular user of platforms like Facebook, you have probably come across AI slop: garbage output from Generative AI, such as surreal artwork and nonsensical recipes. There is even an X account dedicated to documenting the AI slop posted on Facebook.
AI slop often comes from viral spam pages. And since Facebook's algorithm promotes viral content, you will probably see more of this nonsense in your feed than ever. Generative AI has made these spam posts easier than ever to create, and as a result they are flooding social media platforms.
2. Even less authenticity
Social networks have never exactly stood out for authenticity. Influencers show curated glimpses of their lives, usually to project a perfect image or to sell a product they are paid to promote.
But AI takes this lack of realism even further. According to SocialMediaToday, TikTok is testing virtual influencers that let businesses advertise to users through digitally created avatars. Meanwhile, Instagram is testing a feature that lets influencers create their own AI bots to reply to fans in messages, a test announced by Meta CEO Mark Zuckerberg on his Instagram broadcast channel.
This dilution of what little authenticity social media has left doesn't only concern influencers. AI is also being used to generate posts on networks like Reddit and X (formerly Twitter). Bots capable of this have existed for a while, but large language models have made it far harder to tell real posts from fake ones. Every day, users accuse one another of using AI to write their posts and stories.
3. AI-generated mistakes on social networks
The AI features built by social media companies are still a work in progress, which means they make mistakes. Some of these errors spread misinformation or erode users' trust in the platform.
For example, as reported by Sky News, Meta AI replied to a post in a Facebook parents' group, claiming to be a parent with a child enrolled in a program for gifted students. Luckily, Meta AI's responses are clearly labeled on Facebook, so users could tell the reply wasn't from a real person. Still, it raises questions about the reliability of AI tools like Meta AI when they insert themselves into conversations where they have no place.
Meanwhile, Grok AI (X's chatbot) has been criticized for spreading false information. In one case it accused NBA player Klay Thompson of vandalizing houses after misinterpreting the basketball slang "bricks", meaning shots that miss the basket, as literal bricks.
While some AI hallucinations are funny, others are far more disturbing and can lead to real-world consequences.
4. Making the Dead Internet Theory real
The Dead Internet Theory is the idea that the majority of content on the Internet is created by bots. While it was easy to ridicule in the past, the theory seems increasingly plausible as social media platforms are flooded with AI spam.
The fact that social media companies are also adding bots that behave like users makes the idea feel even more real. There is even Butterflies AI, a social media platform where some of the users are simply AI bots. Bots can be useful on social media, but having them pose as fellow users isn't very appealing.
In day-to-day use, Generative AI has also made it easier for spam bots to imitate real users, and telling the two apart is becoming harder and harder.
5. Users have to find ways to protect their content from AI scraping
Users are finding different ways to keep their content out of AI training datasets. But it is not always as simple as opting out: if your posts are public, chances are they have already been used to train an AI model.
As a result, users are turning to alternative measures, such as switching to a private profile or resorting to data poisoning. Poisoning artwork with Nightshade doesn't affect the people viewing the images, but other forms of data poisoning can change the content we see on social networks.
If more users go private on otherwise public social networks, it will become harder to discover people and content you like. And as artists move away from platforms that hand their work to Generative AI as training data, those who just want to admire that work will miss out unless they follow the artists to other platforms.
Although Generative AI has some uses on these platforms, many people feel we don't really need it on social networks. But whatever we think, Generative AI has already fundamentally changed social media in many ways.