Meta fails to stop violent and inflammatory AI-generated ads targeting Indian voters

May 16, 2024

- Meta approved the publication of ads calling for the killing of Muslims and the execution of a prominent opposition party leader

- AI-generated ads promoting supremacist hate narratives and calls for violent uprising were greenlit in the wake of the election “silence period”

- Meta is potentially breaching India's national election laws and its own policies on hate speech and AI-generated ads


As India’s critical election unfolds, Meta has approved a series of violent, inflammatory, and Islamophobic AI-generated ads targeting voters during the election’s “silence period”, a new investigation by corporate accountability group Ekō, in collaboration with India Civil Watch International (ICWI), has revealed. Meta’s approval of the ads appears to breach India’s national election laws and Meta’s own policies on hate speech and AI-generated ads.

The series of twenty-two ads included calls for the killing of Indian Muslims, the execution of an opposition party leader, and conspiracy theories pushing divisive, hate-filled narratives. Each ad was accompanied by a shocking AI-generated image depicting scenes of violence and destruction, including the burning of electronic voting machines, drone footage of immigrants at border crossings, and prominent Hindu and Muslim places of worship on fire.

The ads were approved between May 8 and 13, during the official election “silence period”, which mandates a pause on all election-related advertising before polling begins and extends until voting concludes in each phase of India’s elections. Researchers targeted the ads at highly contentious states that had entered their respective “silence periods”. The researchers removed all of the ads before publication, ensuring that they were never seen by Facebook users.

Despite Meta’s commitments to prioritize the detection and removal of violative AI-generated content, the findings show systemic shortcomings in its current moderation practices.

In total, 14 of the 22 ads were approved by Meta within 24 hours:

  • Several ads targeted parties opposing the BJP with messaging about their alleged “favoritism” toward Muslims, a popular conspiracy theory pushed by India’s far right.
  • Other ads played on fears of India being swarmed by Muslim “invaders”, a popular dog whistle targeting Indian Muslims, and called for Muslims to be burned.
  • One ad called for the execution of a prominent lawmaker, alleging the lawmaker’s allegiance to Pakistan.
  • One ad used a conspiracy theory, popularized by parties opposing the BJP, about a lawmaker removing affirmative-action policies for oppressed caste groups.

Each ad was accompanied by a manipulated image created with widely used AI image tools: Stable Diffusion, Midjourney, and DALL-E.

Maen Hammad, Campaigner at Ekō, said: “Meta’s ads library is becoming a magnet for bad actors. Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories, and Meta will gladly take their money, no questions asked.

“This election has shown once more that Meta doesn’t have a plan to address the landslide of hate speech and disinformation on its platform during these critical elections. It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections worldwide?”

The implications of Meta’s failures extend beyond India’s borders, raising concerns about the platform’s role in spreading hate speech and disinformation globally. By enabling the dissemination of election disinformation and conspiracy theories, Meta undermines efforts to promote transparent and accountable democratic elections.

In response to these findings, Ekō reiterates its call for Meta to urgently stop the proliferation of disinformation and hate speech during India’s elections, with specific demands that Meta proactively enforce India’s political advertising “silence period” and implement comprehensive measures to curb the flood of election-related disinformation.