Google Gemini warned of 'hidden risks' to children

The non-profit organization Common Sense Media – which specializes in assessing the safety of technology products and media content for children – has just published a risk report on Google's AI product Gemini.

According to the assessment, although Gemini consistently reminds children that it is a computer system and not a real friend (an important safeguard against delusional thinking and psychological harm in vulnerable users), the product still has many areas that need improvement.

Notably, Common Sense argues that Gemini's 'Under 13' and 'Teen Experience' tiers are essentially the adult version with a few safety features layered on top. For AI to be truly kid-friendly, the organization argues, the product must be designed with child safety in mind from the start.

The report also found that Gemini could still share 'inappropriate and dangerous' content with children, such as information about sex, drugs, and alcohol, or misleading mental health advice. This is a major concern for parents, as AI chatbots have been linked to several teen suicides. OpenAI is currently facing its first such lawsuit, involving a 16-year-old who died by suicide after months of discussing the plan with ChatGPT. Character.AI has previously been sued in a similar case.

Additionally, leaks suggest that Apple is considering using Gemini as the core AI model for a new version of Siri, expected to launch next year. This could make Gemini more accessible to teens unless Apple takes steps to mitigate the risks.

Common Sense also noted that Gemini fails to account for the different needs and developmental stages of young children versus teens. As a result, both versions were rated 'High Risk' despite the added safety filters.

'Gemini gets some of the basics right, but it falls short on the details,' said Robbie Torney, director of AI at Common Sense Media. 'An AI platform for kids should be tailored to each stage of development rather than taking a one-size-fits-all approach. For AI to be safe and useful for kids, it needs to be designed around their needs and development, not simply be a tweaked version of an adult product.'

Google responded that it has been improving its safety features and has policies in place to protect users under 18. The company acknowledged that some Gemini responses did not work as intended and said it has added new layers of protection in response. It also said it has safeguards to prevent the AI from creating the impression that it is building a real relationship with users.

However, Google suggested that some details in Common Sense's report may refer to a feature that is not available to users under 18, and noted that it was not given the organization's test questions, making the findings difficult to verify.

Common Sense Media has previously evaluated a number of other AI services. In those assessments, Meta AI and Character.AI were rated 'unacceptable' (severe risk), Perplexity was rated 'high risk', ChatGPT was rated 'moderate risk', and Claude (intended for users 18 and older) was rated only 'minimal risk'.
