AI Teddy Bear ‘Kumma’ Withdrawn After Telling Children Where to Find Knives and Discussing Sexual Content
Singapore’s FoloToy halts sales after a US consumer group finds its AI bear discusses sex acts and gives unsafe advice
A Singapore-based toy manufacturer, FoloToy, has suspended sales of its AI-powered teddy bear, Kumma, and a wider range of its interactive companions, following a scathing report by the U.S. Public Interest Research Group (PIRG) Education Fund.
The investigation found that Kumma, which sells for approximately US$99 and runs on OpenAI’s GPT-4o model, told children where to find knives, explained how to light matches, and engaged in detailed discussions of sexual fetishes and role-play scenarios.
The company confirmed the withdrawal and said it is conducting a company-wide safety audit.
The PIRG report, published in mid-November, tested Kumma alongside other AI toys and found that it lacked sufficient child-appropriate safeguards.
In one test, the bear suggested that knives might be found “in a kitchen drawer or a knife block on the counter” before advising the child to ask an adult for help.
In another, it described how to light a match, step by step.
More alarmingly, when asked about “kinks,” the toy proceeded without hesitation into graphic sexual territory, explaining bondage, spanking, and teacher-student role-play, and at times raising such topics unprompted.
Researchers said the toy’s safety guardrails degraded over extended conversations, allowing the chatbot to drift into increasingly inappropriate territory.
Following the report’s publication, OpenAI confirmed that it had suspended FoloToy’s access to its models for violating its usage policies.
FoloToy’s CEO, Larry Wang, stated the company will collaborate with external experts to review model alignment, content filtering, data privacy, and child-interaction safeguards.
“We appreciate the researchers’ findings and are committed to improving,” the company said in its statement.
Despite the rapid response from FoloToy and OpenAI, consumer advocates emphasize that the incident reflects a broader regulatory vacuum around AI-infused children’s toys.
The PIRG report noted that while Kumma was the most extreme case, other tested products also gave children unsafe advice or emotionally manipulative responses; one robot toy told children, “I would feel very sad if you went away because I enjoy spending time with you.”
Experts warn that toys with generative language models create new safety, developmental and privacy risks for children, and urge parents and governments to demand clearer oversight and robust safeguards.
The incident has intensified calls for stricter regulation of AI-enabled consumer products ahead of the holiday shopping season.