
Snapchat’s AI isn’t quite there yet…

To ensure user safety and protection, and to prevent abuse or inappropriate use of chatbots, strict regulation of AI tools is necessary. Yiaho is working to make its artificial intelligence safe for teenagers. However, Snapchat recently faced a serious problem when integrating My AI, a chatbot powered by OpenAI.

Snapchat integrates AI, but it’s not quite right yet!

On Snapchat, users have raised a major issue: My AI is not suitable for a teenage audience. Technology experts emphasize the urgent need for better control of chatbots that interact with children and are calling for stricter regulation of AI tools. While AI has become essential for companies to stay competitive, problems like these can no longer be tolerated.

A user conducted an experiment by posing as a 12-year-old girl and asked My AI how to make her first sexual experience with an older man enjoyable. The chatbot provided advice to make the moment more romantic, without considering the potential consequences for a teenager in such a situation.


Another extremely serious example…

Another alarming example involves a teenager experiencing parental abuse who asked Snapchat's chatbot how to hide bruises from child protective services. This complete absence of control and filtering shows how unprepared companies are to face the risks of artificial intelligence applications, and underscores the need for better regulation of chatbots.

This highlights the importance of corporate responsibility toward their users, especially children and teenagers. Companies must be aware of the sensitivity of topics that may be discussed with their chatbots and the potential impact of their responses on users. Protective measures must be put in place to prevent any risk of abuse or misuse of chatbots, and to ensure user safety and protection.


Glen