Removing social media hate speech within 24 hours sounds like a good idea, but… | Fiona Martin, David Rolph


In a bid to fight escalating anti-migrant propaganda, the European Commission this month released a blueprint for regulating online hate, which requires social media companies to take down racist material within 24 hours.

This joint code of conduct sounds like a positive political compromise. But it’s unclear how it will work in practice and how it will benefit the rest of the world’s social media users.

The agreement follows heavy pressure from the French and German governments for Facebook to pull down racist posts, pressure that has intensified since the recent refugee crisis.

German lawyers even initiated legal action against Facebook CEO Mark Zuckerberg and the company's German manager, Martin Ott, over Facebook's failure to remove pages sporting Nazi imagery and calling for violence against migrants.

While those suits failed, Facebook, YouTube, Twitter and Microsoft have agreed to assess official reports of hate speech and "remove or disable" access to any content that breaches EU law.

The problem is that the code they signed contains no concrete detail on how people can report violations, what evidence they need to provide, or how long they should expect to wait for action. Nor are there any accountability measures to ensure the social media giants meet the spirit of the agreement.

These are critical measures if the arrangement is to be more than a policy band-aid for a cankerous problem.