Meta Under Pressure From Investors as Disinformation Impedes Flood Rescue Efforts in Brazil

May 22, 2024

- Shareholders set to demand greater accountability and transparency in non-US markets

- Conspiracy theories about floods, pushed by Bolsonaro allies, spreading across platform

- Comes as Meta facilitates the spread of violent disinformation in India during critical election

22 May, 2024, Brazil – As the rapid spread of online disinformation hampers flood rescue efforts in Rio Grande do Sul state, investors are set to vote on a shareholder resolution demanding greater accountability from Meta for human rights impacts in Brazil and other key markets.

The resolution is lead-filed by AkademikerPension of Denmark and co-filed by Ekō member shareholders and Storebrand of Norway. It urges the social media giant to come clean about the “effectiveness of measures it is taking to prevent and mitigate human rights risks” in its five largest non-US markets, including the spread of disinformation, hate speech and incitement to violence. It will be voted on during Meta's annual shareholder meeting on 29 May.

The resolution comes amid the rescue response to historic flooding in Brazil's south, which has been significantly impeded by disinformation and conspiracy theories pushed by far-right and pro-Bolsonaro allies. The floods have already killed 161 people and displaced over 580,000.

Brazilian fact-checkers are working around the clock to debunk the infodemic, including:

  • Allegations that a local government is repackaging public donations with official logos to appear as if they are from federal agencies.
  • Claims that Lula’s government denied donations from certain countries. 
  • False claims that Starlink, Musk's satellite internet service, is the only internet provider operating in the state; at least two other operators are fully functional.
  • An old audio clip from 2019 regarding Israeli military rescue support, circulated as if it were current.
  • Inflated numbers of hospital deaths.

The most damaging narratives discredit federal authorities, suggesting they are blocking or denying rescue efforts and donations to victims. Other disinformation includes false claims related to climate change, international celebrities criticizing the government, and the use of unrelated images and videos to exaggerate the tragedy’s death toll.

Meta has faced intense scrutiny for failing to stop far-right networks from weaponizing the platform to spread hate speech and disinformation in major markets such as India and Brazil. Despite the platform’s claims to have put in place a robust team of content reviewers and made significant investments in safety and security, the platform has consistently failed to enforce its own rules, putting critical elections across the world at risk.

A series of investigations published by Ekō during India’s ongoing election have revealed how Meta has facilitated far-right networks in reaching millions of Indian voters with Islamophobic hate ads. The latest report, published this week, documents how Meta approved AI-manipulated political ads seeking to spread disinformation and incite religious violence during India's elections. 

Flora Rebello Arduini, Senior Tech & Human Rights Advisor said: "Instead of being able to focus single-mindedly on flood recovery, Brazil’s government is having to fight a dangerous disinformation crisis too. Meta appears to have learned nothing since the pandemic, or the conspiracy-fuelled crowds who stormed Congress in January 2023, and its reckless failures are still putting lives on the line, especially in Brazil, India and other non-English speaking countries. This shareholder resolution is needed now more than ever: Meta executives must be held accountable."

Christina O’Connell, Ekō Senior Manager of Shareholder Engagement said: “Meta’s repeated failures to curb harmful content on its platforms pose significant risks to human rights and democracy worldwide. Investors deserve transparency and accountability from Meta, especially in its largest markets where the impact of disinformation and hate speech is most severe. This resolution is a crucial step towards ensuring that Meta aligns its practices with its stated commitments to safety and security.”

Additional Resolution on Generative AI Risks 

In addition, Ekō member shareholders have co-filed a second resolution with Arjuna Capital and OpenMic. This resolution urges Meta to address the significant risks posed by the deployment of generative AI tools like ChatGPT and Midjourney.

Ekō researchers submitted a series of ads for approval targeted at voters in India, which contained inflammatory AI-generated images, including ballot boxes being destroyed and well-known mosques and temples being set on fire. Meta approved the majority of the ads and did not detect or label any of the AI-generated imagery, raising serious doubts about Meta's content moderation practices. All of the ads were removed by the researchers before publication, ensuring that they were never seen by Facebook users.

Ekō's History in Brazil and India

During the 2022 Presidential elections in Brazil, Ekō published a series of investigations documenting how far-right and anti-democratic forces inundated the platform with conspiracy theories and election disinformation. This softened the ground for the attempted coup by pro-Bolsonaro forces on January 8, 2023. Meta directly profited from Bolsonaro’s allies using Facebook ads to whip up political division and violence, with the platform’s toxic recommender systems amplifying their message to millions of voters in Brazil.

Similarly, bad actors are using the same playbook to disrupt this month’s critical elections in India. Ekō researchers uncovered a million-dollar network of shadow advertisers pumping out conspiracy theories and incitement to violence on Meta’s ads platform. In addition, Facebook approved a series of ads containing violent speech, including calls to kill Muslims and AI-generated images. These ads clearly breached Meta’s new policies.

In both 2021 in India and during Brazil’s 2022 Presidential elections, supremacist and anti-democratic forces flooded the platform with both paid and organic content, which fueled real-world violence and, in Brazil’s case, an attempted coup.

For more information or interviews contact: Maen Hammad, +12484448806 or at maen@eko.org