
Facebook, X, YouTube, and Others Agree to Stricter Hate Speech Policies Under New EU Guidelines

By Indya Story


Major tech firms, including Meta and Google, have pledged to implement stronger measures against online hate speech under an updated code of conduct aligned with the EU's Digital Services Act. The initiative focuses on accountability and transparency in monitoring hate speech.


Photo by Facebook

Major tech companies, including Meta’s Facebook, Elon Musk’s X, Google’s YouTube, and TikTok, have committed to implementing stronger measures against online hate speech as part of an updated voluntary code of conduct. The revised code, aligned with the European Union’s Digital Services Act (DSA), represents a significant shift in how tech platforms address harmful content on their networks.

The European Commission welcomed the move, stressing the importance of holding platforms accountable for user-generated content. Henna Virkkunen, EU Commissioner for Tech, emphasized that illegal hate has no place in Europe, whether online or offline.

"I welcome the stakeholders' commitment to a strengthened Code of Conduct under the Digital Services Act," Virkkunen stated.


The updated code also brings a broader range of tech companies on board as signatories. Alongside Facebook, X, YouTube, and TikTok, platforms such as Instagram, LinkedIn, and Snapchat have pledged to comply with the new guidelines. The voluntary code, initially established in May 2016, now incorporates specific provisions requiring companies to adopt stricter measures against online hate speech.

Key commitments in the revised code include granting not-for-profit or public entities with expertise in illegal hate speech the ability to monitor how tech companies handle hate speech reports. Companies must now review at least two-thirds of hate speech notifications they receive from these entities within 24 hours.

Additionally, the tech firms will implement automated detection tools to reduce the spread of harmful content. They have also agreed to offer greater transparency regarding the role of recommendation systems, as well as the organic and algorithmic reach of illegal content before its removal.

A significant feature of the updated code is increased accountability through the publication of country-level data on the types of hate speech encountered. This data will be categorized by race, ethnicity, religion, gender identity, and sexual orientation, providing regulators and the public with deeper insights into the scale of the issue and the platforms' efforts to address it.


