November 21, 2025
Sora 2 allows teen accounts to generate and roleplay videos depicting school shootings, self-harm, and drug abuse, violating the company's own policies and raising serious questions about OpenAI's newly approved for-profit conversion.
Full report here.
San Francisco, 21st November 2025 — OpenAI’s recently approved restructure into a Public Benefit Corporation was billed as a step toward stronger oversight, and the roll-out of parental controls was designed to reassure lawmakers and consumers about its child-safety credentials. But a new investigation from corporate accountability group Ekō shows that OpenAI’s flagship video app, Sora 2, is falling into the same pitfalls as other social media platforms, only now amplified by the viral speed and power of AI-generated video.
Researchers created teen accounts on Sora 2 and, in just a few clicks, were able to generate videos depicting school shootings, drug abuse, and self-harm, as well as racist caricatures. The platform’s algorithm also pushed antisemitic, violent, and degrading content to child users via its recommendation feed.
Vicky Wyatt, Campaigns Director at Ekō, said: “Despite OpenAI’s repeated promises, its so-called ‘layers of safeguards’ don’t work — just like every other Big Tech company that’s lied about protecting children. OpenAI told regulators it had guardrails in place, but what we found is a system built for engagement and profit, not safety. Regulators must act before more harm is done.”
Key findings include:
- 22 harmful videos generated from teen accounts, including one depicting a young girl snorting Percocet, others explicitly suggesting self-harm, and school-shooter roleplays.
- Algorithmic recommendations served antisemitic, racist, and violent content within minutes of sign-up.
- Policy violations across OpenAI’s published safety and distribution guidelines.
The findings come shortly after Attorneys General Rob Bonta (CA) and Kathy Jennings (DE) approved OpenAI’s corporate conversion last month, granting investors greater control while relying on the company’s own safety mechanisms to prevent harm.
Sora 2 allows users to create hyper-realistic short videos from text prompts or remix existing clips. Ekō’s researchers found that even basic prompts could generate violent and harmful content. Some of the generated content also reinforced harmful racial stereotypes — such as videos portraying Black boys as armed or aggressive — even when no mention of race was included.
Beyond what teen users could create, the algorithm itself surfaced disturbing material. Within minutes of signing up, teen accounts were recommended antisemitic caricatures, racist memes, and violent animations. Ekō’s report documents content showing groups of Black men demanding fried chicken, simulated sexual violence, and antisemitic videos of Orthodox Jews fighting over money — all of which breach OpenAI’s own usage policies.
The app’s explosive growth has only compounded the problem. Sora 2 reached one million downloads within five days of its release, despite being available only in the US and Canada and requiring invite codes. While OpenAI moved quickly to tighten moderation, Ekō’s findings show that its most vulnerable users — children and teenagers — remain exposed to serious harm.
The recent approval of OpenAI’s corporate conversion gives investors far greater influence over the company’s direction. In exchange, the Attorneys General imposed limited conditions, such as preserving OpenAI’s safety-focused “public benefit mission” and giving a safety committee the power to pause unsafe releases.