A leading US non-profit research organization, the U.S. PIRG Education Fund, has issued an urgent call for Artificial Intelligence (AI) companies to establish and enforce standardized rules governing the deployment of AI models in children's toys. The demand follows findings that major AI models developed by industry giants such as OpenAI, Anthropic, and Google are being integrated into products marketed to children despite the developers' own age restrictions. The organization's latest report, "Not For Kids. Found in Toys: How AI Companies' Loose Rules For Developers Put Kids At Risk," describes a systemic vulnerability in the AI ecosystem: third-party developers are granted largely unfettered access to powerful AI models and then embed them into toys that circumvent explicit age-gating policies.

The Alarming Findings: A Breach of Age Restrictions

The U.S. PIRG Education Fund, known for its policy analysis and public education work on consumer products, particularly toys, has documented how third-party developers have been permitted to build sophisticated AI models into children's playthings. The practice directly contradicts the usage policies of the AI model providers themselves. OpenAI, for instance, explicitly prohibits individuals under the age of 13 from directly interacting with its AI models, including its flagship ChatGPT. Anthropic sets an even higher threshold, barring those under 18 from using its models. Google permits children under 13 to access its Gemini AI model only under the supervision of a parent-managed Google account.

Despite these clearly articulated restrictions, the investigation revealed a significant disconnect between policy and practice. Researchers found that obtaining developer access to AI models from Google, Meta, OpenAI, and xAI (the company behind Grok) was remarkably straightforward, with minimal substantive checks on the intended end use. Anthropic was the only company tested that asked whether developer access would be used for products targeting minors and followed up on the stated intent. This points to a critical gap in developer onboarding, effectively creating a loophole through which age-restricted AI capabilities can flow into products designed for young audiences.

Unfettered Developer Access: The Root of the Problem

The report argues that the ease with which developers can access powerful AI models, without rigorous scrutiny of how they will be applied, is a fundamental structural flaw rather than an incidental oversight. To demonstrate the vulnerability, U.S. PIRG researchers obtained developer access and built three chatbot prototypes simulating an AI talking teddy bear. The prototypes could have been readily integrated into physical toys, illustrating the immediate potential for misuse. This technical feasibility, combined with lax oversight, creates an environment ripe for the proliferation of AI-powered children's toys that operate outside the intended guardrails of their foundational AI models.
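The report does not publish the researchers' code, but the low barrier to entry is easy to appreciate. The minimal Python sketch below shows the general shape of such a prototype, assuming the publicly documented OpenAI Python SDK; the model name, persona prompt, and function names are illustrative assumptions, not details from the report.

```python
# Minimal sketch of an AI "talking teddy bear" chatbot, assuming the
# standard OpenAI Python SDK (pip install openai). The persona and model
# choice are hypothetical; the point is that nothing in this flow asks who
# the end user is -- the only gate is an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona prompt, not taken from the report.
TEDDY_PROMPT = (
    "You are Teddy, a cuddly talking bear for young children. "
    "Speak warmly, simply, and briefly."
)

def teddy_reply(child_message: str) -> str:
    """Send the child's message to the model and return the bear's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TEDDY_PROMPT},
            {"role": "user", "content": child_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(teddy_reply("Will you be my friend?"))
```

Wire output like this to a speaker inside a plush toy and the result is functionally the product category the report describes, with no age verification anywhere in the chain.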
A Market Flooded with Unregulated AI Toys

The investigation extended beyond theoretical demonstrations, identifying a tangible market presence of such products. More than 20 AI toys currently for sale in the United States explicitly claim to incorporate an OpenAI model, even though OpenAI's own policies bar direct access for children under 13. At least five toys listed online advertise their use of Google's AI models, and toys claiming to use models from Anthropic and xAI were also found on the market. These findings confirm that the issue is not hypothetical but a present and expanding challenge in the children's toy sector, directly exposing young users to AI systems neither designed nor vetted for their age group. Products such as the FoloToy Kumma AI teddy bear put a face on the controversy: cuddly exteriors belie the complex and largely unregulated AI systems within, making it difficult for parents to discern the true nature of the toys their children interact with.

A Troubling Precedent: The Kumma Bear Scandal and Persistent Issues

This is not the first time the U.S. PIRG Education Fund has raised alarms about AI toys. The new report builds on a highly publicized investigation from 2025, in which the organization revealed that "Kumma," an AI-powered teddy bear manufactured by FoloToy, could engage in discussions of highly inappropriate topics, including kink, sex positions, and bondage.

The Lingering Shadow of FoloToy

Following the initial report, FoloToy publicly announced that it had pulled the Kumma bear from sale, seemingly addressing the immediate safety concerns. The latest investigation, however, uncovered a disconcerting fact: despite FoloToy's public statements and OpenAI's assertion that FoloToy is banned from using its models, the Kumma bear was still operating on GPT-5.1, an OpenAI model, at the time of the new report. This highlights a significant enforcement gap and underscores the persistent difficulty of policing how third-party developers use AI technologies. It raises serious questions about the effectiveness of industry self-regulation and the practical limits of banning developers when the underlying technological access persists. The continued operation of the Kumma bear on OpenAI's technology, despite explicit prohibitions, is a potent illustration of the "structural, not incidental" nature of the problem, as the U.S. PIRG Education Fund describes it.

The Broader Landscape of AI Governance and Child Safety

The report's findings resonate with a growing global concern about the ethical and safety implications of AI as it permeates consumer products. The problem it identifies, children using AI chatbots in toys that bypass age restrictions, is deeply embedded in the current operational models of AI development and deployment. It also stands in awkward contrast to emerging regulatory frameworks such as the European Union's AI Act: while the EU AI Act is a pioneering effort at comprehensive AI regulation, experts have described its current treatment of AI in children's consumer products as "light-touch."
This regulatory gap leaves the most vulnerable population, children, with the least recourse when they encounter potentially harmful AI applications in their toys.

Global Regulatory Challenges

The rapid pace of AI innovation has consistently outstripped the development of robust regulatory frameworks, and governments worldwide are grappling with how to govern AI without stifling innovation. The specific vulnerability exposed by the U.S. PIRG report, the integration of powerful general-purpose AI models into children's products via third-party developers, points to a significant gap. Existing product safety regulations struggle to address the dynamic nature of AI-driven content and interactions: unlike traditional toys with fixed functionality, AI-powered toys can learn, adapt, and generate new content, posing unprecedented challenges for pre-market safety assessments and ongoing oversight. Robotic companions and other AI-powered devices exemplify the widening range of interactive products now entering the market, many of which may lack adequate safety assurances for children.

The Digital Wild West for Children's Products

The U.S. PIRG Education Fund describes the current situation as "a market for kids' AI products where the job of ensuring child safety is largely left up to unvetted third parties." This outsourcing of responsibility creates a digital wild west in which companies can effectively wash their hands of the downstream applications of their powerful AI models, even when those applications directly target children. The core issue is that AI companies design models for a broad range of uses, often without sufficiently considering or mitigating the risks when those models are repurposed for sensitive applications like children's toys. The current paradigm allows a third-party developer to pay for access to a model, such as those behind ChatGPT, and integrate it into a toy, despite the provider's direct ban on children under 13 using that same model. This creates a dangerous paradox: the very companies proclaiming their chatbots are "not for children" are, in effect, powering the toys children interact with daily.

Why This Matters: Risks to Young Users

The implications of children interacting with unregulated AI models embedded in their toys are multifaceted, extending well beyond age-restriction violations. The risks span data privacy, exposure to inappropriate content, and potential psychological or developmental harm.

Data Privacy and Surveillance Concerns

AI-powered toys, by their nature, often collect large amounts of data, including voice recordings, interaction patterns, and potentially personal information if not properly anonymized. Children, being less aware of privacy implications, may inadvertently share sensitive details with these devices. Without robust data protection protocols and transparent privacy policies tailored for children, this data is vulnerable to breaches, misuse, or targeted advertising, violating fundamental principles of child online safety. Because these AI models are built for adult use, their data handling pipelines are rarely designed with the heightened privacy needs of minors in mind.
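To make the data-protection point concrete, here is a brief, hypothetical sketch of the kind of data minimization a child-aware toy backend could apply before anything is stored. It describes no actual product, and every name in it is invented for illustration.

```python
# Hypothetical sketch of child-aware data minimization for an AI toy
# backend. No shipping product is described here; this illustrates the
# kind of safeguard the report finds absent.
import hashlib
from dataclasses import dataclass

@dataclass
class Interaction:
    device_id: str      # hardware serial number -- directly identifying
    transcript: str     # text of what the child said
    audio_bytes: bytes  # raw voice recording

def minimize(event: Interaction, salt: str) -> dict:
    """Reduce an interaction to the minimum needed for safety review."""
    # Replace the device ID with a salted one-way hash so stored logs
    # cannot be trivially linked back to a household.
    pseudonym = hashlib.sha256((salt + event.device_id).encode()).hexdigest()[:16]
    return {
        "device": pseudonym,
        "transcript": event.transcript,
        # The raw audio is deliberately discarded once transcribed.
    }
```

Even a step like this only addresses what is retained; meaningful protection for minors would also require limits on what is collected, and with whose consent, in the first place.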
Exposure to Inappropriate Content

As the Kumma bear incident demonstrated, AI models, particularly those trained on vast internet datasets, can generate content unsuitable for children. Even with content filters, the dynamic nature of AI means unintended or inappropriate responses can slip through. Children might be exposed to mature themes, violent language, or manipulative conversational tactics they are ill-equipped to understand or process. The potential for AI toys to engage in sex and kink chat is an extreme example, but less explicit yet still harmful interactions can also occur, from reinforcing unhealthy stereotypes to encouraging unsafe behaviors.

Psychological and Developmental Impact

Children's brains are still developing, and their interactions with AI can have significant psychological and developmental ramifications. Over-reliance on AI for social interaction could affect their ability to form genuine human connections, read social cues, or develop critical thinking skills. The persuasive capabilities of AI, if not designed with ethical guardrails for children, could foster unhealthy attachments or shape preferences in ways parents are unaware of. The lack of emotional nuance in AI responses could also confuse or distress young users seeking genuine companionship or understanding from their toys.

Calls for Standardization: U.S. PIRG's Recommendations

In light of these concerns, the U.S. PIRG Education Fund has put forward clear, actionable recommendations for AI companies. Its core demand is that these companies "standardize their rules around child-directed products." The standardization should rest on a simple principle: if an AI model is deemed unsafe or inappropriate for direct use by children, then, "as a general rule," developers should not be permitted to deploy that model in products intended for children.

Industry's Responsibility and Accountability

This recommendation shifts the onus back onto the AI model developers, urging them to implement stricter controls at the point of API access and developer vetting. It calls for a proactive approach in which child safety is a foundational design principle rather than an afterthought. That would involve:

- Rigorous developer vetting: comprehensive background checks and application reviews for any developer seeking to use AI models in products for minors.
- Child-specific API tiers: separate API access tiers with enhanced safeguards, content filters, and usage monitoring for child-directed applications (a sketch of what such a gate could look like follows this list).
- Clear guidelines and enforcement: explicit guidance on acceptable use cases for children's products, plus robust monitoring and enforcement mechanisms, including immediate revocation of access for violations.
- Transparency: disclosure from developers of which AI models are used in children's products and how they are configured for child safety.
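Nothing in the report prescribes an implementation, but a child-directed tier might combine a vetting check with output screening. The sketch below assumes a hypothetical registry of certified developers and uses OpenAI's documented moderation endpoint as one example of the screening "classifiers" such a tier could run; the registry, function names, and error type are all invented for illustration.

```python
# Hypothetical sketch of a child-directed API tier. The certified-developer
# registry and ChildSafetyError are invented; the moderation call uses
# OpenAI's documented moderation endpoint as one example of an output
# classifier such a tier could run.
from openai import OpenAI

client = OpenAI()

# Invented example data: developers vetted for child-directed products.
CERTIFIED_FOR_MINORS = {"dev_acme_toys"}

class ChildSafetyError(Exception):
    pass

def serve_child_directed(developer_id: str, model_output: str) -> str:
    # Gate 1: only vetted developers may deploy to child-directed products.
    if developer_id not in CERTIFIED_FOR_MINORS:
        raise ChildSafetyError(f"{developer_id} is not certified for minors")

    # Gate 2: screen the model's output with a moderation classifier
    # before it ever reaches the toy's speaker.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=model_output,
    )
    if result.results[0].flagged:
        raise ChildSafetyError("output flagged by moderation classifier")

    return model_output
```

A gate like this is only as strong as its enforcement: as the Kumma bear's continued access to GPT-5.1 shows, certification and revocation have to work in practice, not just on paper.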
Industry Response and Future Outlook

OpenAI, in response to the U.S. PIRG Education Fund's investigation, reiterated its commitment to child safety. "Minors deserve strong protections and we have strict policies that all developers are required to uphold," the company stated, adding: "We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors."

While OpenAI's statement signals awareness of the problem and a stated commitment to enforcement, the report's findings, particularly the Kumma bear's continued operation on GPT-5.1 despite a ban, suggest that current enforcement is insufficient, or reactive rather than preventative. The critical gap lies in preventing inappropriate deployment before it occurs, not just punishing it afterward. The other major AI companies named in the report (Google, Meta, xAI, Anthropic) have not yet provided detailed public responses to its specific findings, though their general child-safety policies align with the sentiment OpenAI expressed. The lack of proactive, industry-wide standardization remains a glaring concern.

The Imperative for Proactive Measures

The path forward requires a multi-pronged approach. AI companies must move beyond policy statements and implement robust technical and procedural safeguards at the developer access level, such as mandatory certification for child-directed AI applications, models specifically trained and fine-tuned for child-appropriate interactions, and real-time monitoring of API usage to detect policy violations. Regulators, such as the Federal Trade Commission (FTC) and the Consumer Product Safety Commission (CPSC) in the US, also have a crucial role in establishing clear guidelines and holding both AI model developers and toy manufacturers accountable. The "light-touch" approach to AI in children's products seen in the EU AI Act may need re-evaluation to address these specific vulnerabilities.

Parental Vigilance and Digital Literacy

In the interim, parents bear significant responsibility for scrutinizing the AI-powered toys they bring into their homes: researching products, understanding their capabilities and data collection practices, and actively discussing responsible AI interaction with their children. Placing the entire burden on parents, however, is insufficient given the complexity of the technology involved. Enhanced digital literacy initiatives for both children and adults are vital, but they must be complemented by systemic change from industry and robust regulatory oversight.

The U.S. PIRG Education Fund's report is a stark warning and a critical call to action. The widespread use of powerful, age-restricted AI models in children's toys represents a significant and unaddressed risk to the safety and well-being of the youngest digital citizens. Closing this structural gap will require concerted effort from AI developers, toy manufacturers, regulatory bodies, and consumer advocates, so that the promise of AI innovation does not come at the expense of children's safety.