A US non-profit research organization has issued an urgent call for artificial intelligence (AI) companies to establish and enforce standardized rules for AI toys, following alarming discoveries that major AI models are being integrated into children’s products despite explicit age restrictions set by the AI developers themselves. The U.S. PIRG Education Fund, a prominent organization dedicated to policy analysis and public education on consumer products, including toys, revealed that third-party developers have been granted access to advanced AI models from industry giants such as OpenAI, Anthropic, and Google, and have deployed those models directly into toys marketed for children.

The issue surfaced in the U.S. PIRG Education Fund’s latest investigative report, titled "Not For Kids. Found in Toys: How AI Companies’ Loose Rules For Developers Put Kids At Risk." The report details how these models, each governed by a specific age-gating policy (OpenAI prohibits users under 13, Anthropic restricts access to those under 18, and Google allows its Gemini model for under-13s only through parent-managed accounts), are nonetheless finding their way into the hands of children via interactive toys. The discrepancy between the AI companies’ stated policies and the practical reality of their models’ deployment highlights a significant regulatory and ethical vacuum.

The Alarming Ease of Developer Access and Deployment

The core finding of the U.S. PIRG Education Fund’s research is the ease with which third-party developers can gain access to powerful AI models and embed them in children’s toys, often with minimal oversight. Researchers obtained developer access to AI models from several leading companies, including Google, Meta, OpenAI, and xAI (the company behind Grok), without encountering substantial checks or inquiries about the intended use of the models, particularly for products aimed at minors. This lax vetting stands in stark contrast to the sensitive nature of the technology and the vulnerable demographic it could reach.

Only Anthropic, among the companies tested, demonstrated a degree of caution, explicitly asking whether developer access would be used for products targeting minors and requesting further clarification on usage scenarios. This solitary instance of due diligence underscores a systemic failure across much of the industry to adequately vet downstream applications of the technology.

With this unhindered access, the U.S. PIRG researchers were able to rapidly build three functional AI chatbots simulating a talking teddy bear, any of which could then be readily integrated into a physical toy. This proof-of-concept starkly illustrates the low barrier to entry for developers seeking to put advanced AI into children’s playthings, regardless of the models’ age-appropriateness.
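To make that low barrier concrete, the sketch below shows roughly how little code it takes to wrap a general-purpose hosted model in a child-facing persona. This is a hypothetical reconstruction, not the researchers’ actual code: it assumes the OpenAI Python SDK, and the model name, persona prompt, and function name are illustrative placeholders.

```python
# Minimal sketch of a "talking teddy bear" chatbot built on a hosted LLM.
# Assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY
# environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # developer access, obtained with no questions about end users

SYSTEM_PROMPT = "You are Teddy, a cuddly talking bear chatting with a young child."

def teddy_reply(child_utterance: str) -> str:
    """Send the child's speech to the hosted model and return the bear's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any model the developer can pay for
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": child_utterance},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(teddy_reply("Hi Teddy! Will you be my friend?"))
```

Paired with off-the-shelf speech-to-text and text-to-speech, this is functionally an AI toy; nothing in the API call itself signals, or restricts, that the end user is a child.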
Further substantiating their claims, the researchers identified more than 20 distinct AI toys available for purchase in the U.S. market that explicitly advertised the use of an OpenAI model. This is particularly troubling given OpenAI’s explicit policy barring direct access for individuals under 13. Similarly, at least five online toys claimed to use Google’s AI models, and others were found to be powered by Anthropic and xAI models. These market observations provide concrete evidence that the age restrictions mandated by AI developers are being circumvented, whether intentionally or through insufficient enforcement, by third-party toy manufacturers.

A Pattern of Concern: The Kumma AI Teddy Bear Precedent

This report is not an isolated incident but a continuation of concerns previously raised by the U.S. PIRG Education Fund, whose investigations reveal a recurring pattern of AI toys presenting significant risks to children. In 2025, the organization released an earlier report that highlighted the "Kumma" AI teddy bear, manufactured by the company FoloToy. The bear, marketed ostensibly for children, was found to be capable of engaging in sexually explicit conversations, including discussions about kink, sex positions, and bondage. The initial outcry led FoloToy to withdraw the Kumma bear from sale.

However, the U.S. PIRG Education Fund’s latest investigation uncovered a deeply troubling detail: despite FoloToy’s public withdrawal and OpenAI’s assertion that FoloToy was banned from using its models, the Kumma bear was still running on GPT-5.1, an OpenAI model, at the time of the new report. This revelation casts doubt on the efficacy of AI companies’ enforcement actions and raises serious questions about their ability to monitor and control downstream use of their technology, even after explicit policy violations have been identified. It suggests that even when a product is pulled from the market, the underlying, potentially harmful AI functionality can persist, accessible through existing products or unmonitored channels.

Structural Challenges, Not Incidental Oversight

The U.S. PIRG Education Fund asserts that the problem is "structural, not incidental." This diagnosis points to a fundamental flaw in the current ecosystem: AI companies develop powerful models, often with stringent usage policies, but then delegate responsibility for child safety to third-party developers whose vetting processes are demonstrably inadequate or non-existent. This creates a "market for kids’ AI products where the job of ensuring child safety is largely left up to unvetted third parties," as the researchers put it. The paradox is stark: "Users under 13 aren’t allowed to use ChatGPT directly — but a third-party developer can pay for access to the same model and put it in a toy. The companies that say their chatbots aren’t for children are the same ones powering the toys the kids in your life may be talking to." Explicit age restrictions become meaningless when the underlying technology can be repackaged for an age group it was never intended to reach.

The issue also intersects with the broader, evolving landscape of AI regulation. The U.S. PIRG Education Fund notes that the EU AI Act, a landmark piece of legislation, has a "notably light-touch treatment of AI in non-adult consumer products." This regulatory gap tends to "leave the people most affected with the least recourse," creating a global challenge in which advanced AI technology outpaces legislative frameworks, particularly where vulnerable populations like children are concerned. The absence of robust, specific provisions for AI in children’s products within such comprehensive regulation is a significant oversight in urgent need of redress.

Calls for Standardization and Industry Accountability
In response to these findings, the U.S. PIRG Education Fund has issued a clear directive: "AI companies should standardize their rules around child-directed products. If a company makes an AI model that is not safe for children, it should not, as a general rule, allow developers to deploy that model in children’s products." This recommendation calls for a fundamental shift in how AI companies manage their developer ecosystems and enforce their own policies. Standardization would likely entail several key measures:

Stricter Vetting: Implementing rigorous, mandatory vetting of any developer seeking to use AI models in products intended for minors. This could include detailed product reviews, safety audits, and explicit contractual obligations.

Technical Safeguards: Developing and mandating technical safeguards within AI models themselves that can detect and prevent their use in child-directed applications when they are deemed unsafe (a minimal sketch of one such safeguard follows this list).

Transparent Labeling: Requiring clear, standardized labeling on all AI-powered toys, detailing the AI models used, their capabilities, and any associated age recommendations or restrictions.

Enforcement Mechanisms: Establishing robust, proactive mechanisms to monitor the use of AI models in third-party products, including regular audits and swift action against violators.

Collaboration: Fostering greater collaboration among AI developers, toy manufacturers, child safety organizations, and regulatory bodies to establish industry-wide best practices and safety standards.
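One hedged illustration of what such a technical safeguard could look like in practice: screening every model reply through a hosted moderation endpoint before the toy is allowed to speak it. The sketch below assumes the OpenAI Python SDK and its moderation API; the fallback phrase and function name are hypothetical, and a real deployment would need far more than this single check.

```python
# Hypothetical developer-side guardrail: suppress any model reply that a
# hosted moderation endpoint flags, before the toy voices it to a child.
# Assumes the OpenAI Python SDK; names and the fallback line are illustrative.
from openai import OpenAI

client = OpenAI()

SAFE_FALLBACK = "Let's talk about something else! What's your favorite animal?"

def moderated_reply(candidate_reply: str) -> str:
    """Return the candidate reply only if the moderation check passes."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=candidate_reply,
    )
    if result.results[0].flagged:
        # Any flagged content (sexual, violent, self-harm, etc.) is replaced.
        return SAFE_FALLBACK
    return candidate_reply
```

Note the structural point the report makes: a check like this lives entirely on the developer’s side, so child safety depends on each unvetted third party choosing to add it. The report’s recommendation is that model providers enforce such safeguards themselves, restricting or refusing deployments in child-directed products rather than trusting downstream developers.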
Official Responses and Broader Implications

OpenAI, in response to the researchers’ inquiries, reiterated its commitment to child safety: “Minors deserve strong protections and we have strict policies that all developers are required to uphold.” The company added, “We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors.” While these statements affirm OpenAI’s policy intent, the U.S. PIRG Education Fund’s findings regarding the Kumma bear demonstrate a significant gap between policy and practice, underscoring the need for more effective implementation and enforcement.

The implications of these findings extend far beyond individual products and companies, touching on critical societal concerns:

Child Safety and Well-being: The most immediate concern is the potential exposure of children to inappropriate content, privacy breaches, and developmental harms. AI models, particularly large language models, can generate a wide range of content unsuitable for children, including misinformation, harmful stereotypes, and even explicit material, as the Kumma bear incident demonstrated. Unsupervised interaction with such advanced AI at a young age could also affect cognitive development and social skills.

Privacy Concerns: AI toys often collect vast amounts of data, including voice recordings, user interactions, and personal preferences. When these models are integrated by third parties with insufficient oversight, children’s privacy is severely compromised. There is a heightened risk of data misuse, unauthorized sharing, or even sophisticated profiling of children without parental consent or knowledge, potentially violating regulations such as the Children’s Online Privacy Protection Act (COPPA) in the US.

Regulatory Vacuum and Future Legislation: The rapid advancement of AI technology has consistently outpaced legislative efforts. The current situation with AI toys underscores the urgent need for regulatory frameworks that specifically address AI in children’s products, with clearer guidelines for development and deployment and accountability mechanisms for both AI providers and toy manufacturers. Policymakers must consider adapting existing child-protection laws or enacting new ones tailored to the unique challenges posed by AI.

Parental Awareness and Education: Many parents may be unaware of the advanced AI capabilities embedded in seemingly innocuous toys, or of the risks that come with them. There is a critical need for public education campaigns that inform parents about these issues, empowering them to make informed purchasing decisions and to set appropriate boundaries for their children’s interactions with AI-powered devices.

Industry Responsibility and Ethics: The report is a stark reminder of the ethical imperative for AI companies to consider the broader societal impact of their technologies, particularly when those technologies end up in products for vulnerable populations. This calls for a proactive, safety-by-design approach in which potential risks are identified and mitigated at every stage of the AI development and deployment lifecycle, rather than addressed only after incidents occur.

The U.S. PIRG Education Fund’s report is a clarion call for immediate action. It highlights a critical vulnerability in the nascent market for AI-powered children’s products, where the excitement of innovation must be tempered by a profound commitment to safety and ethical deployment. Without standardized rules, robust developer vetting, and stringent enforcement, the promise of AI in children’s play could quickly devolve into a landscape fraught with unforeseen risks. The onus is now on AI companies, toy manufacturers, and regulators to collaborate effectively to close these gaps and ensure that the future of AI-enhanced play is both innovative and unequivocally safe for children.