By Jonathan Lee
CW: hate speech, extremism
Ever seen a comment on Facebook that really riled you up?
Probably. But I mean one that really floored you – stopped you mid-scroll, and in a red mist, made you click those three innocuous little dots on the right and Submit to Facebook for Review?
Something like this:
All of these comments were deemed acceptable within the Community Standards that Facebook uses to moderate objectionable content on its platform. Facebook purportedly censors hate speech “because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence”, yet hate speech against ethnic, sexual and religious minorities flourishes on the social network, with a certain randomness to what its moderators consider unacceptable.
Facebook users have the option to directly report content to Facebook moderators if they believe it goes against the Community Standards. Last year, The New York Times published an article detailing the process by which Facebook’s moderators assess content. Their investigation revealed the frustrations of an inside source who described the dizzying number of charts, texts and diagrams (running over 200 pages) they must memorise and, in a matter of seconds, apply to each submitted report to decide what content is in, and what is out.
The system is far from ideal. Facebook employs an army of more than 15,000 moderators who receive relatively little training and must apply this convoluted mass of guidance, formulaically, to every report they review. It is perhaps unsurprising that the method designed by young Silicon Valley tech-gurus to combat hate speech on their platform is to treat it as just another algorithm in the system.
The process needs an overhaul, that much is clear, but it’s much trickier to accomplish than it may seem at first glance. Content that seems to you to be blatantly inciting hatred against a group of people may be acceptable to others, and much depends on its context.
I am an admin for the European Roma Rights Centre’s Facebook page, where we get our fair share of hate speech underneath our posts on a weekly basis. So I’m used to simply deleting comments which attack any minority group on the basis of innate protected characteristics (such as skin colour, sexuality, gender, etc.). We also remove any comments which call for real-world violence, use dehumanising language, or which attribute negative traits as an integral characteristic of an entire minority group. There is a difference between someone saying ‘gypsies are just work-shy, asocial parasites’ and ‘all of the gypsies I have met aren’t interested in getting a job’.
To be sure, there is a grey area between what constitutes hate speech and what is ‘just’ someone saying racist things. It’s something we discuss regularly at the ERRC, and where the line between the two is drawn is not the same for everyone. I can only imagine the difficulties this must pose if you are moderating an entire social media platform such as Facebook. Indeed, Facebook’s Hard Questions blog on fighting hate speech discusses many of these considerations.
However, much of the time, unambiguous comments which demonstrably fall under practically anyone’s definition of hate speech are routinely permitted by Facebook moderators. In my book, for instance, referring to any group of people as a ‘cancer on society’ and using ‘final solution’ vocabulary to call for their removal is unacceptable, and if allowed to become mainstream online, can have horrendous real-world consequences.
Arguably, as a private business, Facebook don’t have to police their site to such an extent and are free to run their business however they like. It’s clearly working well for them: Facebook brought in an estimated $55 billion in revenue in 2018 and has an ever-expanding user base of over 2 billion monthly users, in addition to Instagram’s billion users and WhatsApp’s 1.5 billion (both companies it has acquired).
Facebook’s reach and influence is comparable to, or greater than, the power exercised by most nation states. If Facebook were a country, it would be the most populous on the planet, with a GDP inside the top 100 countries. It therefore makes sense that the platform should be subject to the kind of hate speech laws that many nation states implement.
After Mark Zuckerberg promised to crack down on hate speech in January 2018, Facebook banned the far-right group Britain First and all of its leaders from the platform in March for repeatedly posting “content designed to incite animosity and hatred against minority groups.” The page had over 2 million Likes before it was shut down, and a counter-group named Report Britain First organised weekly events to report the far-right group’s hate speech to Facebook.
Despite thousands of reports of objectionable content from the more than 15,000 members of Report Britain First, it was Donald Trump sharing a Britain First video on Twitter in November 2017 that seemed to precipitate a clampdown on the group by social media platforms. The ensuing media storm around the decision, including a press statement from Facebook and the outcry from the global far-right, was all good business for Facebook. With its advert-driven profit model, the longer users stay on the platform, the more money Facebook generates.
Graphic and controversial content is a real money-spinner for Facebook, so it’s not really in its interests to censor something that keeps users on the site. Moderators can now put a viewing filter over such content, warning users about the nature of the image or video they are about to click. It’s like putting a flashing neon sign above it saying “Exciting stuff! Click here!”
When we see comments on Facebook which fall outside our normal bubble of personally held convictions, it jars our worldview and our assumptions of what ‘the people’ think and feel.
Facebook’s algorithms deliver constantly adjusted content to our news feeds, in line with our own moral convictions, political views and social norms. This makes us complacent about what we take public perception of these issues to be.
Far-right groups, nationalist parties and even hostile nation states make use of the way Facebook works to target people with fake news and specifically tailored content, creating an online echo chamber which foments and normalises extremist views. (Views which, of course, do not remain virtual.) Facebook’s inability or unwillingness to remove outright hate speech from its platform helps create these environments, which spill into the real world. Racist politicians from India to Italy are emboldened to make comments they could not previously have made publicly. Eventually the accumulation of hate speech, on and offline, results in further marginalisation, discrimination and hate crimes committed against minority groups. While Facebook tries to get its act together, the online foot-soldiers of fascism grow ever bolder, and their vile rhetoric spills further into the domain of mainstream politics and public policy around the world.
If you want to see the other side of the looking glass, try liking a few pages you fundamentally disagree with to upset the algorithms. It will give you a peek into a whole other online world, and perhaps while you’re there you’ll also see something worth reporting.
Featured image: Author’s own
The Norwich Radical is non-profit and run by volunteers. All funds raised help cover the maintenance costs of our website, as well as contributing towards future projects and events. Please consider making a small contribution to fund a better media future.