By Eve Lacroix
Content warning: This article mentions rape threats, harassment, and racism.
In The Question Concerning Technology, Heidegger argues that technology is both a means to an end and a human activity, and that the two definitions are not mutually exclusive. Technology is a means of facilitating our lives, eliminating manual, tedious, or repetitive tasks. We also use technology to meet our human need for community, connecting with friends and family through social media and searching for partners on dating apps.
As these technologies advance at faster and faster rates, our darker human impulses come into play. Behind the cloak of anonymous usernames, ‘never read the comment section’ has become a phrase to live by.
You may recall Microsoft’s short-lived foray into Twitter AI. Unveiled in March 2016, the Twitter bot Tay was designed to emulate the “casual, jokey speech patterns of a stereotypical millennial” by learning from what was tweeted at her. Within 16 hours, Tay had become a racist, Holocaust-denying, sexist bot tweeting her support for genocide. The account was quickly taken down; Microsoft issued an apology and deleted the most offensive tweets. Tay was reactivated that night, but her second public appearance was equally disastrous, with the bot tweeting about smoking weed in front of the police and then spamming her followers with the repeated tweet “you are too fast, please take a rest…”
Essentially, Tay became an internet troll, learning from the predictably bad behaviour of people online. One of the victims of Tay’s verbal attacks was Zoe Quinn, games developer and creator of Depression Quest. Quinn called out Twitter users for ruining the bot for her, explaining how Twitter’s “content-neutral algorithms” were to blame.
Is harassment any less harmful when it happens online, or is carried out by an AI? Orchestrated online attacks can lead to doxing, the release of private information such as phone numbers, home addresses and nude pictures, and, if you’re a woman online, to death and rape threats. At what point can you dissociate this “virtual threat” from your off-screen life?
Twitter has started to lose core users. Its stock price fell sharply towards the end of last year, and new account growth has stagnated. Twitter also has a longstanding problem with online harassment, one that particularly affects women, trans people and women of colour. The company has always been reluctant to deal with this harassment, under the guise of preserving free speech. A string of journalists and celebrities, including actress Leslie Jones, Fifth Harmony singer Normani Kordei and Guardian journalist Lindy West, have publicly cited the harassment and the sexist and racist abuse they received on the website as their reason for leaving the platform.
After the election of the USA’s new President, Twitter conceded in a blog post that “The amount of abuse, bullying, and harassment we’ve seen across the Internet has risen sharply over the past few years.”
To combat harassment, Twitter, alongside Facebook and Microsoft, signed an EU code of conduct, promising to delete all items of hate speech flagged on their websites within 24 hours and to retrain all their staff. However, the EU Commission reported that of the three organisations, Twitter still had the slowest response rate, at 48 hours. Twitter has rolled out some new tools for tackling the issue, including a simplified reporting system for abusive comments and a mute button for key words, phrases and threads. Earlier this month, Twitter replaced the iconic egg avatar with one that more closely resembles a human, their reasoning being that “We’ve noticed patterns of behaviour with accounts that are created only to harass others (…) This has created an association between the default egg profile photo and negative behaviour.” Despite these actions, however, Twitter’s approach remains inconsistent: in December and March, the company came under fire for suspending, then reinstating, former KKK Grand Wizard David Duke and white nationalist Richard B. Spencer.
Twitter users have been denouncing the platform’s non-response to harassment since its inception. Freedom of speech may be a fundamental right, but when it takes the form of a vicious attack that endangers someone else’s safety and wellbeing, must we really tolerate it? If Twitter is struggling to fix its harassment problem, it need look no further than its own users. Users have called for stricter identification rules on Twitter, just as Facebook requires identification. Users have suggested that Twitter hire more people for its Support Team in order to deal with complaints within the promised 24-hour window. Users have pointed out that Twitter could limit the number of accounts a person can create by tracking IP addresses. And by hiring the people who are most likely to be harassed, such as women and people of colour, Twitter would quickly learn what needs to change.
The online world colours our day-to-day lives and interactions. If organisations like Twitter cannot live up to the responsibility of keeping their users safe, and lawmakers like the European Commission cannot hold them to account, people will – by leaving the platform in droves.
Featured image credit: Andreas Eldh