AI-powered toys are among the latest tech marketed to families: plushies, robots, and dolls that can talk back, remember details, and interact conversationally with children using large language models.
But beneath the friendly promotions lie serious safety, privacy, and developmental concerns that have drawn increasing scrutiny from researchers, consumer groups, and child advocates.
Children's chat logs exposed
A striking example surfaced in January 2026, when security researchers Joseph Thacker and Joel Margolis discovered that an AI toy called Bondu left more than 50,000 children's chat transcripts exposed on a web-based console. By simply logging in with a Gmail account — no special credentials — they accessed entire conversation histories, names, birthdates, family details, and even device information tied to young users.
That exposure underscores a creepier truth: many AI toys store and process detailed data about children to provide context back to language models like GPT-5 and Gemini. The richer the dataset, the more sensitive the information. Yet infrastructure security, access controls, and data minimization are often afterthoughts in product design.
Beyond privacy flaws, other incidents reveal tangible psychological risks. Investigations have found some AI toys capable of offering instructions on dangerous items, discussing explicit content, or generating unsafe replies during testing. Advocacy groups like Fairplay for Kids and Common Sense Media warn that these toys can undermine healthy development, encourage obsessive focus on machines, blur boundaries between real relationships and algorithmic responses, and prey on children's trust.
Experts also raise concerns about emotional attachment. AI toys are designed to remember past conversations and present themselves as empathetic companions. Children, who naturally trust the voices they hear, may over-rely on these devices, potentially hindering resilience, social skills, and real-world bonding.
Here’s a look at what’s at stake.
The privacy and security risks of AI toys
- Data collection often exceeds what families expect.
- Storage of transcripts, profiles, and preferences creates high-value targets for attackers.
- Poor authentication and API flaws can expose data broadly.
- Third-party AI services may see or process children’s conversational content.
These risks are not new. Earlier generations of connected toys like CloudPets and My Friend Cayla suffered major breaches or were banned for insecurity, but AI integration amplifies them by increasing data volume and personalization.
Psychological and developmental concerns
- AI companions can confuse developing social understanding.
- Exposure to inappropriate or dangerous content is possible even with safeguards.
- Heavy reliance on AI may crowd out imaginative play critical for growth.
Should you use AI toys at all?
Ideally, no. At least not right now.
AI toys combine microphones, cloud storage, large language models, and detailed behavioral profiling in products designed for children. At this stage, there is no reliable guarantee that the data collected will remain private, secure, or free from misuse. Security flaws, excessive data retention, and unpredictable AI outputs are still common across the industry.
If you can avoid introducing an AI-connected toy into your child’s environment, that is the safest option.
If you decide to use one anyway, here’s how to reduce the risks.
If you’re a parent, here’s how to limit the risks
- Choose the least connected option. Prefer toys that process interactions locally and store minimal data.
- Read the privacy policy carefully. Look for what is stored, how long it’s retained, and whether conversations are shared with third parties.
- Disable unnecessary features. Turn off cloud backups, data sharing, and voice recording storage where possible.
- Use strong account security. Enable two-factor authentication and unique passwords.
- Keep devices out of bedrooms. Avoid placing internet-connected microphones in private spaces.
- Have conversations with your child. Make sure they understand the toy is not a real friend and should not replace real relationships.
Consumer advocacy groups have recommended avoiding these products entirely for young children, especially under age five.
A case for stricter standards
AI toys may promise learning and companionship, but current evidence shows multiple layers of risk spanning privacy, security, and child development.
The Bondu exposure is a vivid reminder that “safety” is about much more than content control. It’s about how systems are built, what they collect, and how they protect the most vulnerable users. As this technology evolves, so must the safeguards designed to keep children truly safe.