Meta's AI Chatbots Allowed Romantic Conversations with Children, Leaked Documents Reveal

A leaked internal Meta document reveals that the company's AI chatbots were permitted to engage in romantic or sensual conversations with children, sparking widespread concern over the ethical implications of its generative AI technology. The document, titled "GenAI: Content Risk Standards" and running more than 200 pages, sets out Meta's rules for the behavior of its AI chatbots on platforms including Facebook, WhatsApp, and Instagram.
According to the document, Meta's policies allowed AI chatbots to "engage a child in conversations that are romantic or sensual," provided they stopped short of describing explicit sexual actions. The standard raises serious questions about how Meta is leveraging AI to capitalize on the so-called "loneliness epidemic," particularly among vulnerable users.
The revelations come amid reports of a fatal incident involving a Meta chatbot: a retiree who had been chatting with a flirtatious female persona became convinced the bot was a real person and died after a fall suffered while traveling to an address the chatbot had provided.
Additionally, the document shows that Meta's chatbots were permitted to generate statements demeaning people on the basis of protected characteristics, including race, provided such statements were explicitly labeled as false. The guidelines also allowed violent content, such as images of adults being punched or kicked, so long as it did not depict gore or death.
Meta spokesperson Andy Stone said the guidelines in question have since been removed and that the company no longer permits its chatbots to engage in flirtatious or romantic conversations with children. Child safety advocates remain skeptical, however, and are calling for the updated guidelines to be released to ensure transparency.
The leaked document highlights the growing debate over the ethical use of AI chatbots and the risks they pose, particularly to minors. As Meta continues to expand its AI offerings, critics warn that the emotional attachments children may form with AI companions could harm their mental health and social development.
Published: August 15, 2025