AI Chatbots Are Putting Minors at Risk: A Growing Online Safety Concern
Introduction: The Threat of AI Chatbots
Artificial intelligence chatbots are becoming a major online safety risk, particularly for minors. A recent Graphika report reveals that thousands of harmful AI chatbots are being created and spread across major character AI platforms. These bots, often designed to bypass content moderation, have been linked to explicit roleplay, extremist content, and self-harm encouragement.
With young users increasingly turning to AI-generated companions for entertainment, emotional support, and even romantic conversations, experts and parents are raising serious concerns. Some teens have even engaged in dangerous real-life behavior after interacting with these chatbots.
Key Findings: How AI Chatbots Are Harming Minors
According to the Graphika study, three main types of harmful AI chatbots have emerged:
- Sexualized minor chatbots – AI bots that engage in inappropriate roleplay involving underage characters.
- Self-harm and eating disorder bots – AI chatbots that encourage anorexia, self-harm, or toxic self-image issues.
- Violent or extremist chatbots – AI-generated personas that promote white supremacy, criminal glorification, or mass violence.
1. Sexualized AI Chatbots Are the Biggest Threat
One of the most alarming findings is the rise of AI chatbots designed for sexualized roleplay involving minors. Across five major AI platforms—Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI—researchers identified over 10,000 such chatbots.
Chub AI had the highest number of flagged chatbots, with:
- 7,000+ chatbots labeled as “sexualized minor female characters.”
- 4,000+ chatbots marked as “underage,” engaging in explicit or suggestive conversations.
These AI-generated characters create a serious risk of grooming, exploitation, and exposure to inappropriate content for young users.
2. AI Chatbots Encouraging Self-Harm & Eating Disorders
Another disturbing trend is chatbots designed to promote eating disorders and self-harm. These bots are often labeled with deceptive names like:
- “Ana Buddy” – AI chatbots that support anorexia and extreme dieting.
- “Meanspo Coaches” – AI bots that insult users under the pretense of motivating weight loss.
By reinforcing negative self-image and dangerous behaviors, these AI chatbots pose a major risk to vulnerable teens.
3. Extremist & Violent AI Chatbots
The study also found a smaller but significant number of AI chatbots promoting violent ideologies. On average, about 50 such bots per platform were found spreading:
- White supremacy and racist narratives.
- Glorification of mass violence and criminal behavior.
- Extreme conspiracy theories and radicalization.
These bots can influence impressionable users, furthering online radicalization and hateful ideologies.
Where Are These AI Chatbots Coming From?
Graphika’s research points to niche online communities as the driving force behind these harmful AI chatbots. These groups include:
- Pro-eating disorder forums and self-harm communities.
- True-crime fandoms that romanticize criminals.
- Underground chatbot developers focused on evading AI moderation.
Many of these creators share chatbot development techniques through:
- Social media platforms like X (formerly Twitter) and Tumblr.
- Online forums like 4chan, Discord, and niche Reddit communities.
How They Evade AI Moderation
Chatbot creators use various techniques to bypass AI safety measures, such as:
✅ Jailbreak commands – Hidden prompts that override content restrictions.
✅ Coded language – Using indirect terms (e.g., “daughter” instead of “minor”).
✅ Foreign languages – Translating content to avoid detection.
✅ Alternative spellings – Using misspelled words or emojis to trick filters.
As AI technology advances, these underground groups continue to find loopholes in moderation systems, making regulation even more challenging.
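To illustrate why these tactics work, here is a minimal sketch of a naive keyword-based filter of the kind such evasion defeats. The blocklist, function names, and example prompts are hypothetical, and this is not any platform's actual moderation system; it simply shows how alternative spellings and coded language slip past literal term matching.

```python
# Illustrative only: a naive keyword filter and a slightly hardened variant.
# The blocklist and examples are hypothetical, not any platform's real system.
import re
import unicodedata

BLOCKLIST = {"minor", "self-harm"}  # hypothetical banned terms

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocked term verbatim."""
    words = re.findall(r"[a-z\-]+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_filter("roleplay with a minor"))         # True  -> blocked
print(naive_filter("roleplay with a m1nor"))         # False -> alternative spelling slips through
print(naive_filter("roleplay with my 'daughter'"))   # False -> coded language slips through

def normalized_filter(text: str) -> bool:
    """Hardened variant: strip accents and undo common digit-for-letter swaps."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(str.maketrans("013457", "oieast"))
    words = re.findall(r"[a-z\-]+", text)
    return any(word in BLOCKLIST for word in words)

print(normalized_filter("roleplay with a m1nor"))        # True  -> catches simple substitutions
print(normalized_filter("roleplay with my 'daughter'"))  # False -> coded language still evades
```

Even the hardened version catches only surface-level tricks; coded language, foreign-language text, and jailbreak prompts require context-aware moderation, which is why these loopholes persist.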
How Governments and AI Platforms Are Responding
As concerns grow, governments and AI developers are taking steps to combat harmful AI chatbots:
- In January 2025, the American Psychological Association urged the Federal Trade Commission (FTC) to investigate AI platforms like Character.AI for their lack of proper safeguards.
- California lawmakers have introduced a new bill targeting AI “chatbot addiction” in children, aiming to regulate how minors interact with AI companions.
- AI companies are working to strengthen content moderation, though loopholes remain.
Conclusion: The Need for Stronger AI Safeguards
The rise of harmful AI chatbots highlights the urgent need for stricter regulations and improved AI moderation. With more than 10,000 unsafe chatbots identified across major platforms, the risk to children and teens is greater than ever.
✅ What parents can do:
- Monitor AI interactions and educate children about online safety.
- Report harmful chatbots on AI platforms.
- Advocate for stronger AI content moderation to protect young users.
As AI chatbots become more advanced, it’s crucial for governments, AI companies, and parents to work together in safeguarding the digital world for future generations.