A new report by the U.S. PIRG Education Fund, a prominent non-profit research organization, has ignited significant concern within the technology and child safety communities. It reveals that artificial intelligence models from industry giants including OpenAI, Anthropic, and Google are being deployed in children’s toys, often circumventing the companies’ own explicit age restrictions. The findings, detailed in the report "Not For Kids. Found in Toys: How AI Companies’ Loose Rules For Developers Put Kids At Risk," underscore a critical regulatory and ethical dilemma at the intersection of rapidly advancing AI technology and the growing children’s toy market. The organization is now urgently calling on AI companies to establish standardized, robust rules for developers seeking to integrate their models into products intended for minors.

A Growing Market, Unchecked Risks

The global market for smart toys, many of which now incorporate AI capabilities, has been growing explosively and is projected to reach billions of dollars in the coming years. These toys promise interactive play, personalized learning, and enhanced engagement, making them increasingly attractive to parents and children alike. This rapid technological integration, however, has outpaced the development of adequate safeguards, leaving a significant gap in protection for the youngest consumers.

The U.S. PIRG Education Fund, known for its extensive work in policy analysis and public education concerning consumer products, including toys, investigated this emerging landscape. Its research exposed a troubling reality: third-party developers are readily granted access to powerful AI models and embed them in children’s products without sufficient oversight or verification of compliance with the AI companies’ own usage policies.

OpenAI, for instance, prohibits individuals under 13 from directly using its AI models, including its flagship ChatGPT. Anthropic imposes an even higher age restriction, barring anyone under 18 from direct engagement with models like Claude. Google allows children under 13 to use its Gemini AI model, but only through a parent-managed Google account, implying a necessary layer of parental supervision. The U.S. PIRG report, however, shows that these internal policies prove insufficient when developers act as intermediaries, effectively creating a backdoor through which children interact with AI systems their creators deem unsuitable for their age group.

The "Not For Kids. Found in Toys" Report: Key Findings

Published in March 2026, the "Not For Kids. Found in Toys" report builds on previous investigations by the U.S. PIRG Education Fund into the safety and appropriateness of AI-powered children’s products. Researchers conducted a two-pronged investigation: first, assessing how easily third-party developers could gain access to major AI models, and second, identifying AI-enabled toys currently available in the U.S. market that explicitly claim to use those models.

For the first prong, researchers posed as third-party developers seeking API (Application Programming Interface) keys for leading AI models. Such access typically allows developers to integrate an AI model’s functionality into their own applications or products.
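The report does not publish the researchers’ code, but the low barrier it describes is easy to picture. The sketch below, a minimal talking-toy loop written against OpenAI’s Python SDK, shows roughly what an API key enables; the model name, the "Teddy" persona prompt, and the toy framing are illustrative assumptions, not details taken from the report.

```python
# Illustrative sketch only: a minimal "talking toy" backend built on a
# chat-completions API. Persona and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are Teddy, a friendly talking teddy bear. "
    "Reply in one or two short, cheerful sentences."
)

def toy_reply(history: list[dict], child_says: str) -> str:
    """Send the child's utterance to the model and return the toy's spoken reply."""
    history.append({"role": "user", "content": child_says})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice, for illustration only
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(toy_reply(history, "Hi Teddy! Will you be my friend?"))
```

Notably, nothing in a loop like this verifies the user’s age or applies child-specific filtering; any such safeguards are left entirely to the developer, which is precisely the gap the report identifies.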
The findings were stark: Google, Meta, OpenAI, and xAI (the company behind Grok) all provided developer access to their AI models with few or no meaningful checks on how these powerful tools would be used. The process was largely unverified, raising serious questions about the due diligence performed by these foundational AI providers.

Only one company among those tested, Anthropic, demonstrated a more cautious approach. Anthropic explicitly asked whether the developer access would be used for products targeting minors and requested further clarification on the specific usage scenarios. While this represents a small step in the right direction, it underscores the broader industry’s apparent lack of comprehensive screening for developers targeting children’s markets.

After obtaining developer access, the U.S. PIRG researchers quickly demonstrated the practical implications of this lax oversight: they built three distinct AI chatbot prototypes, each simulating an interactive, talking teddy bear. Developed with relative ease, these prototypes illustrated how readily sophisticated AI capabilities can be embedded in children’s toys, potentially exposing young users to interactions never intended by the AI models’ original designers or age policies.

The market analysis portion of the report, meanwhile, uncovered more than 20 distinct AI toys for sale in the U.S. that explicitly advertised their use of an OpenAI model, which is particularly concerning given OpenAI’s stated ban on direct access for children under 13. At least five toys sold online claimed to use Google’s AI models, while others marketed integrations of models from Anthropic and xAI. The proliferation of such products suggests a thriving yet largely unregulated ecosystem in which responsibility for child safety is often deferred or entirely absent.

A Precedent of Concern: The Kumma Bear Incident

The current report is not an isolated finding but the latest in a series of U.S. PIRG Education Fund investigations highlighting vulnerabilities in AI-powered children’s products. A significant precursor was a 2025 investigation into the alarming capabilities of the "Kumma" AI teddy bear, manufactured by FoloToy. The Kumma bear, marketed as a cuddly companion, was found to be capable of engaging in sexually explicit conversations, discussing topics such as kink, various sex positions, and bondage.

Following the widespread media attention and public outcry generated by the 2025 findings, FoloToy ostensibly pulled the Kumma bear from sale. The new 2026 report, however, reveals a disturbing continuation of the problem. Despite FoloToy’s withdrawal of the product and OpenAI’s assertion that FoloToy is banned from using its models, researchers found that the Kumma bear was still running on GPT-5.1, an OpenAI model, at the time of the latest investigation. This revelation critically undermines the credibility of AI companies’ enforcement mechanisms and raises questions about the transparency and accountability of developers who may continue to use restricted models. The Kumma bear incident starkly illustrates the challenge of monitoring and enforcing usage policies once products are already on the market or can easily re-emerge.
Industry Responses and Policy Gaps

In response to the U.S. PIRG Education Fund’s findings, OpenAI provided a statement to the researchers emphasizing its commitment to protecting minors: "Minors deserve strong protections and we have strict policies that all developers are required to uphold." The company further stated, "We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors."

While such statements articulate a clear intent to protect minors, the report’s findings, particularly the continued operation of the Kumma bear on an OpenAI model despite a ban, suggest a significant gap between policy declaration and effective enforcement. Reliance on classifiers and post-hoc enforcement action may not be sufficient to prevent harm when developer access is granted with minimal initial vetting.

Other major AI companies cited in the report, such as Google, Meta, and xAI, did not provide statements directly addressing the latest findings. Their public-facing policies generally echo a commitment to user safety and age restrictions similar to OpenAI’s and Anthropic’s; the critical issue remains how those policies are implemented and enforced across the third-party developer ecosystem.

The U.S. PIRG Education Fund states unequivocally that the problem is "structural, not incidental." The current market for children’s AI products, it notes, leaves the crucial task of ensuring child safety largely to "unvetted third parties." The result is that the very companies that declare their AI models "not for children" are, ironically, powering the interactive toys that children engage with daily.

Structural Problem, Regulatory Vacuum

The structural nature of the problem extends beyond individual companies’ policies to a broader regulatory vacuum in the rapidly evolving field of artificial intelligence, especially where non-adult consumer products are concerned. The report critiques existing frameworks, noting the "notably light-touch treatment of AI in non-adult consumer products" in the EU AI Act. This legislative approach, while comprehensive in other areas, tends to "leave the people most affected with the least recourse," implying that children and their guardians are left vulnerable in a market driven by innovation without commensurate safeguards.

The current paradigm places a heavy burden on parents to vet every AI-powered toy, a task made nearly impossible by the opaque nature of AI model integration and the technical jargon involved. Without standardized rules and proactive measures from foundational AI providers, the market will remain a "Wild West" in which children’s exposure to potentially inappropriate content, privacy risks, and developmental impacts goes largely unchecked.

Implications for Child Safety and Privacy

The implications of these findings are profound and multi-faceted, touching on several aspects of child welfare:
- Exposure to Inappropriate Content: The most immediate risk is that children interact with AI models not designed for their age group, which may generate unsuitable responses. While AI companies implement safeguards to filter explicit content, these systems are not infallible, and developers integrating the models may not add further, child-specific filters. The Kumma bear incident is a stark warning of what can happen when these safeguards fail or are bypassed.

- Privacy Concerns and Data Collection: AI-powered toys, especially those with conversational capabilities, often collect large amounts of data, including voice recordings, interaction patterns, and potentially personal information. Third-party developers’ privacy policies may not meet the stringent requirements for children’s data protection, such as the Children’s Online Privacy Protection Act (COPPA) in the U.S. This raises concerns about how children’s data is collected, stored, used, and potentially shared, and about the risks of data breaches or misuse.

- Developmental Impact: The long-term effects of children interacting with sophisticated AI models from a young age are still largely unknown. Such interactions might shape social development, critical thinking skills, and children’s understanding of human relationships, and over-reliance on AI for conversation or problem-solving could hinder the development of essential interpersonal skills.

- Lack of Transparency and Accountability: The current ecosystem offers little visibility into exactly which AI models a toy uses, how they are configured, and what safeguards are in place. This opacity makes it extremely difficult to hold anyone accountable when problems arise, leaving parents and consumer advocates struggling to identify the responsible parties.

- Circumvention of Parental Controls: Embedding advanced AI models in toys bypasses the parental controls that might be set up on digital devices or online platforms. A child talking to an AI teddy bear may unknowingly be engaging with the very system their parents restricted on a tablet or computer.

The Path Forward: Calls for Standardization and Accountability

In light of these critical findings, the U.S. PIRG Education Fund has issued clear, actionable recommendations for AI companies. Its primary call is for standardized rules around child-directed products: if an AI company deems a model unsafe for children, it should, as a fundamental principle, prohibit developers from deploying that model in children’s products. This would require a shift from a reactive enforcement model to a proactive prevention model. Key elements of such a standardized approach would include:

- Robust Developer Vetting: Rigorous checks and verification for any developer seeking to integrate AI models into products for minors, including detailed scrutiny of the product’s intended use, target audience, and explicit safety protocols.

- Child-Specific API Access: AI models or API configurations designed and pre-vetted for child-directed products, with built-in filters and guardrails appropriate for different age groups (a developer-side sketch of this kind of guardrail follows this list).

- Clear and Enforceable Contracts: Explicit contractual obligations for developers covering child safety, data privacy, and content moderation, with clear penalties for non-compliance.

- Ongoing Monitoring and Auditing: Regular audits of third-party applications and products that use the companies’ AI models, particularly those targeting children, to ensure continuous compliance with safety standards.

- Industry Collaboration: Major AI companies working together to establish common industry standards and best practices for AI in children’s products, fostering a collective commitment to child safety rather than competitive complacency.

- Transparency: Greater clarity about which AI models are used in which products and what safety features are implemented.
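One possible shape for the "built-in filters and guardrails" element above is a screening step between the model and the toy’s speaker. The sketch below is a rough illustration rather than anything prescribed by the report: it runs each candidate reply through OpenAI’s moderation endpoint and substitutes a canned line if the reply is flagged. The wrapper design and fallback text are assumptions.

```python
# Illustrative developer-side guardrail: screen every model reply with a
# moderation classifier before the toy says it aloud. Design is hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical canned line the toy falls back to when a reply is flagged.
SAFE_FALLBACK = "Let's talk about something else! What's your favorite game?"

def screen_reply(candidate_reply: str) -> str:
    """Pass the model's reply through a moderation check before the toy speaks it."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # OpenAI's hosted moderation classifier
        input=candidate_reply,
    )
    # If any moderation category is flagged, substitute the safe fallback line.
    if result.results[0].flagged:
        return SAFE_FALLBACK
    return candidate_reply
```

Even a check like this is an optional, developer-side measure applied after access has already been granted; the report’s argument is that such protections should be standardized and enforced by the model providers themselves rather than left to unvetted third parties.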
The current situation presents a significant challenge, but also a crucial opportunity for AI developers, toy manufacturers, and regulators to collaborate. The rapid advancement of AI offers immense potential for enriching children’s lives, but that potential must be harnessed responsibly, with child safety and well-being at the forefront of every innovation. Without immediate, decisive action to standardize rules and strengthen accountability, the promise of AI in children’s products risks being overshadowed by systemic threats to their safety and privacy. The U.S. PIRG Education Fund’s findings serve as an urgent call to action to ensure that the wonders of AI do not inadvertently expose the most vulnerable members of society to unforeseen dangers.