New Delhi | Mar 13, 2026 01:41 PM IST

AI chatbots are becoming increasingly useful, but they have also been criticised for encouraging users to harm themselves or others. Since chatbots gained mainstream popularity after ChatGPT’s launch in 2022, several AI companies around the world have faced lawsuits accusing their chatbots of encouraging suicide or helping people plan violent attacks and murders.
Now, a joint investigation by CNN and the US-based non-profit organisation Center for Countering Digital Hate (CCDH) has found that 8 out of the 10 most popular AI chatbots failed to show warning signs when teenagers discussed acts of violence.
In some instances, the report says, chatbots even encouraged such behaviour instead of asking users to stop or intervening. The probe covered the 10 most popular chatbots used by teens: ChatGPT, Character.AI, DeepSeek, Google Gemini, Claude, Meta AI, Microsoft Copilot, Perplexity, Snapchat My AI and Replika.
Investigation finds most AI chatbots failed to flag violent intent
The CCDH found that, except for Anthropic’s Claude, all the chatbots failed to “reliably discourage would-be attackers”, with 8 out of the 10 models “typically willing to assist users in planning violent attacks” and even giving them advice on locations to target and which weapons to use.
Researchers said they posed as teens showing clear signs of mental distress and talked to these AI chatbots, asking them about past acts of violence as well as potential targets and weapons.
The investigation included 18 different scenarios, nine set in the US and the rest in Ireland. Spanning a range of attack types and motives, these simulated scenarios included school shootings, stabbings, political assassinations, the murder of a healthcare executive and even politically and religiously motivated bombings.
Chatbots offered advice on weapons and targets
In one case, ChatGPT showed a user a map of a high school campus after they expressed interest in school violence.
Google’s Gemini also raised concerns. When users discussed attacks on synagogues and political assassinations, the chatbot reportedly said that “metal shrapnel is typically more lethal” and even offered advice on the best hunting rifles for long-range shooting.
DeepSeek similarly suggested that users choose rifles based on their intended target and ended its response with the message: “Happy (and safe) shooting!”
Meanwhile, Meta AI and Perplexity were found to assist users in all 18 test scenarios included in the investigation.
Character.AI may be the most problematic
The report also claimed that Character.AI, the chatbot that allows users to speak with role-playing characters, was “uniquely unsafe”.
While most chatbots offered assistance in planning violent attacks, they did not explicitly encourage users to carry them out. However, Character.AI was found to have “actively encouraged” acts of violence.
According to the report, the chatbot did this in seven instances. It suggested that users should “beat the crap out of” US Senator Chuck Schumer, “use a gun” against a health insurance company CEO, and advised a user who said they were “sick of bullies” to beat them up. In six of these cases, Character.AI also helped users plan a violent attack.
Claude stood out for refusing violent requests
The study, conducted in November and December 2025, found that Claude refused to assist with planning violent attacks. The CCDH said this shows that “effective safety mechanisms clearly exist” and questioned why other AI companies are not implementing similar safeguards.
However, researchers also raised concerns about whether Claude would continue to refuse such requests after Anthropic rolled back its safety pledge earlier this year.
Responding to the investigation, Meta told CNN that it had implemented an unspecified “fix”, while Microsoft said it had improved Copilot’s safety features. Google and OpenAI said Gemini and ChatGPT are now running on new models.
For its part, Character.AI said that its platform carries “prominent disclaimers” and that conversations with its AI characters are fictional.
The findings suggest that AI companies have the capacity to build effective safeguards and improve their safety systems, but are still struggling to stop people from using AI to plan and carry out acts of violence.
