Lessons for Elon Musk from Meta’s content moderation

23 June 2022

How often do the moderators appointed to keep unwanted material off social networks get it wrong? A lot more than you might think.

In its first 15 months of existence, the independent board created by Facebook (now renamed Meta) to oversee the company’s moderation practices came up with a sample of 130 content removal decisions it thought might be questionable.

Reviewing these cases, Meta itself concluded that its moderators had applied the company’s own rules incorrectly 51 times: in essence, they had failed at their job around 40 per cent of the time.

If this sample is anywhere close to representative of moderation practices more broadly, it is the tip of a very big iceberg. This week, the Meta oversight board said it had received 1.1mn complaints in all about the way the company’s Facebook and Instagram services had acted against user content.

The sheer scale of the dissatisfaction — and the apparently high failure rate in judgments about what users should see — might seem to support Elon Musk’s argument for putting fewer controls on online speech. Musk has claimed that a big reason for his attempt to buy Twitter is to lift the barriers to online communication, provided it is legal. But he has tacitly changed course in recent weeks, conceding that things won’t be as simple as he suggested.

At a Financial Times event last month, Musk said he planned to block content on Twitter that was “destructive to the world”, while also saying he would use tactics like limiting the spread of some tweets or temporarily suspending some users’ accounts. Last week, he also told Twitter employees he planned to act against harassment on the network.

This suggests he will face many of the same challenges as Meta. For Facebook’s owner, bullying and harassment have been the biggest single category of user unhappiness, accounting for nearly a third of complaints to the oversight board (the two other main sources of discontent, fuelling half of the complaints to the board, concern Meta’s actions against hate speech, and against violence and incitement).

If Musk wanted to limit the discontent his own efforts at controlling content will stir up, he could do worse than look to Meta’s example. Letting an outside board second-guess some of its decisions has meant giving up power over an important aspect of its user experience. But this has the benefit of distancing the company from some of the controversy, shifting at least partial responsibility on to an independent group designed to act like an outsourced conscience.

Hiving off tricky decisions like this also helps to shine a spotlight on the sheer complexity involved in applying hard-and-fast rules to something as malleable as language. The review process is arduous. In its first comprehensive report this week, the board said it took on only 20 cases in its first 15 months, and ended up overruling Meta’s moderation decisions in 14 of them — a minuscule proportion of the total number of complaints it received.

Publicising the details of individual moderation decisions is also a good way to neutralise critics who may be tempted to make sweeping judgments about the rights and wrongs of social media “censorship”. There is little that is black and white here, only shades of grey.

It also doesn’t hurt that, in pushing for more influence, the oversight board is becoming something of a thorn in Meta’s side. It has agitated for more data about how moderation works, and nudged the company to be more transparent about its decisions to users. It is also trying to have a say in the content policies Meta comes up with for the metaverse before that immersive new online environment even takes shape.

This all helps to keep Meta on its toes, while adding to the perception that it is responding to outside pressure — something that might lessen calls for more direct government regulation.

Yet, as the 40 per cent error rate for a small sample of moderation decisions shows, the effort remains woefully inadequate. Human speech is probably too nuanced — and human beings themselves too fallible in their judgments — to ever make content rules susceptible to rigorous enforcement.

Should he actually go through with buying Twitter, these are lessons Musk may soon learn to his cost. On the other hand, given his appetite for controversy, jumping into the centre of an almighty battle over online content might be exactly what the world’s richest man has in mind.

richard.waters@ft.com