
Twitter’s Bot Policy: Who’s Really Human?

We use our Twitter Toolkit to test the effectiveness of Twitter’s new anti-bot policy, and dig deeper into the murky complexities of attempting to sort good robots from evil and man from machine.

On the 18th of December 2017, Twitter announced that they were changing their terms of service to prohibit “accounts that affiliate with organisations that use or promote violence against civilians to further their causes” and “content that glorifies violence or the perpetrators of a violent act.” While tweets in breach of these new rules could have likely been flagged and removed under the existing terms of service, the rule change signalled the beginning of a so-called ‘purge’ of far right and alt-right accounts from the site. Amid cries of political bias, Twitter put out this statement:

“Twitter’s tools are apolitical, and we enforce our rules without political bias. As part of our ongoing work in safety, we identify suspicious account behaviors that indicate automated activity or violations of our policies around having multiple accounts, or abuse. We also take action on any accounts we find that violate our terms of service, including asking account owners to confirm a phone number so we can confirm a human is behind it. That’s why some people may be experiencing suspensions or locks. This is part of our ongoing, comprehensive efforts to make Twitter safer and healthier for everyone.”

In other words, Twitter says these accounts weren’t banned for their politics; they were banned because they were bots. In fact, this was only the latest in a long line of actions Twitter have taken to be seen to be acting against bots on their platform. Since the 2016 US election, a narrative has slowly been building around the idea of bots spreading fake news and misinformation to manipulate public opinion online. The practice is probably much older, but that’s when the media began to take notice. By summer 2017, Twitter stated “We’re working hard to detect spammy behaviors at source”, but noted that “to ensure people cannot circumvent these safeguards, we’re unable to share the details of these internal signals”.[1]

By September, however, with allegations of Russian meddling in the US election via Twitter bots splashed across headlines, and with Twitter vice president Colin Crowell appearing before congressional committees to discuss the issue, Twitter released a further statement laying out in far greater detail how they planned to combat bots and misinformation on their platform [2]. The plan included:

  • Making it harder to sign in from a suspicious connection
  • Finding accounts which were created together or at non-random intervals
  • Finding accounts which tweet at suspiciously regular intervals (a sketch of this heuristic follows the list)
  • Detecting when a user logs in from an unusual location (suggesting they’ve been hacked)
  • Finding accounts which post ‘suspicious content’ (though no word on what this entails)
  • Removing third-party apps which produce spam or break the API terms of service
  • Requiring phone verification when challenged about being a bot, and trusting some telephone providers less than others
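
To make the interval heuristic concrete, here’s a minimal sketch (ours, not Twitter’s actual detection code) of how ‘suspiciously regular intervals’ might be measured. The threshold and minimum sample size are arbitrary assumptions:

```python
# A minimal sketch of the 'suspiciously regular intervals' heuristic.
# Our illustration only; the threshold and sample size are assumptions.
from datetime import datetime, timedelta
from statistics import mean, stdev

def looks_machine_timed(timestamps, max_cv=0.1, min_tweets=20):
    """Return True if the gaps between tweets are unusually uniform.

    timestamps: datetimes of one account's tweets.
    max_cv: coefficient of variation (stdev/mean of gaps) below which
            the posting rhythm looks automated rather than human.
    """
    if len(timestamps) < min_tweets:
        return False  # too little data to judge
    ordered = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
    if mean(gaps) == 0:
        return True  # a burst of tweets in the same second
    return stdev(gaps) / mean(gaps) < max_cv

# An account posting exactly every 30 minutes gets flagged; humans drift.
robotic = [datetime(2018, 6, 1) + timedelta(minutes=30 * i) for i in range(30)]
print(looks_machine_timed(robotic))  # True
```

A human’s posting rhythm is lumpy; a scheduled bot’s is metronomic, which is exactly what the coefficient of variation picks up.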

All in all, a fine plan in my opinion. So six months on, we used our spiffy new Twitter Toolkit [3] and what we’d learned looking at Trump’s bot army [4] to have a look at what impact it had.

First of all, it is much more difficult to create new bots on Twitter. The SMS verification for new accounts makes it very difficult for anyone to create more than a handful unless they have a deal with a telephone provider, which would suggest state-level actors or big operations in places with terrible business transparency. In any case, Twitter should quickly discover which phone providers can’t be trusted.

Is Twitter now a bot-free zone then? Heck no! The first and most obvious type of bot still lingers like calling cards in a telephone booth: accounts with photos of scantily clad models and bios like “See my pics! Go to sp4m.v1rus.com” sending unsolicited direct messages. These are quite clearly bots, and should have been easy for Twitter to find and flag for removal.
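
Finding them really shouldn’t be hard. As a rough illustration (our own toy heuristic, with made-up patterns rather than anything Twitter actually uses), a couple of regular expressions over the bio text already catch the example above:

```python
# A crude illustration of how the most obvious spam bots could be flagged.
# The patterns are invented for this example; they are not Twitter's rules.
import re

COME_ON = re.compile(r"see my (pics|photos)|hot singles|free followers", re.I)
LEET_LINK = re.compile(r"\b\w*\d\w*\.\w*\d\w*\.(com|net|ru|xyz)\b", re.I)

def bio_looks_spammy(bio: str) -> bool:
    """Flag bios that pair a come-on phrase with a digit-obfuscated link."""
    return bool(COME_ON.search(bio)) and bool(LEET_LINK.search(bio))

print(bio_looks_spammy("See my pics! Go to sp4m.v1rus.com"))     # True
print(bio_looks_spammy("Father, cyclist, occasional blogger"))   # False
```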

Less clear are the multitude of accounts which might follow you if you get a reputation as a user who follows back. Many of these accounts retweet the same messages, post the same images, or use emojis in the same distinctive style. They are made harder to detect by the armies of real Twitter users who look like bots: pyramid schemes, retweet races, competitions giving prizes to the spammiest users and plain human herd mentality combine to create a class of human interaction which looks pretty close to bot-chatter. Many of these accounts seem to tweet in Turkish, Persian and especially Arabic, languages which might lack speakers at Twitter HQ. Twitter have offices in over 30 countries around the world, but their only one in the Middle East is in Dubai. Twitter’s own recruitment page says “We’re a very diverse office, with almost as many nationalities as we have employees” and “Our small office has a startup feeling”. I for one don’t expect to find too many linguists or experts in Arabic culture there, let alone in Turkey, where Twitter has one employee for an estimated 30 million users [5].
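
One way to surface this messier second class is to look for identical content posted across many different accounts. The sketch below is purely illustrative, and the (account, text) input format is an assumption rather than any real schema: normalise each tweet, hash it, and flag any text posted verbatim by a suspiciously large number of accounts.

```python
# Sketch of a coordination check: hash normalised tweet text and find texts
# posted verbatim by many distinct accounts. Our illustration only.
import re
import hashlib
from collections import defaultdict

def normalise(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)  # strip links, which often vary
    text = re.sub(r"\s+", " ", text)          # collapse whitespace
    return text.strip().lower()

def coordinated_texts(tweets, min_accounts=5):
    """tweets: iterable of (account_id, text) pairs.
    Returns {hash: accounts} for texts shared by >= min_accounts accounts."""
    by_hash = defaultdict(set)
    for account, text in tweets:
        digest = hashlib.sha1(normalise(text).encode("utf-8")).hexdigest()
        by_hash[digest].add(account)
    return {h: accs for h, accs in by_hash.items() if len(accs) >= min_accounts}
```

The catch is exactly the one described above: retweet races and competition entrants produce the same signature as bots, so any purge based on this kind of check will sweep up real people too.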

There appears to be very little written about non-English-language bots, either online or in the academic literature. Given the narrative around Twitter and the Arab Spring, Twitter bots in the Middle East remain horribly under-researched. So far as I can tell, we’re the first to discover and identify this class of bot in the region. We’ll take a closer look at these bots in another blog post, so check back regularly!

If Twitter were to crack down on this second class of bots, they would risk removing some of these human users too. There is a simple solution to this problem, however, and indeed Twitter are already using it: just ask anyone who looks too much like a bot to verify their account by entering a code sent by SMS. This is exactly what happened to the people pushed off Twitter in the ‘alt-right purge’.

There probably exists a third class of bot, indistinguishable from human users and with some trick to beat SMS verification. Tackling these bots would be hard, but it looks like Twitter haven’t even solved the easy problems yet.

Why are there still so many bots on Twitter then? It’s possible that they persist by slipping through the holes in Twitter’s anti-bot net, looking just human enough to be spared. Perhaps millions of this type of bot have been removed, but a few still somehow remain; only Twitter have access to the numbers of accounts sanctioned. Perhaps there are so many bots on Twitter that they’re still working through them six months on.

However, it seems from Twitter’s press releases that, in light of all the discussion of ‘fake news’ and ‘election hacking’, their focus in clearing bots from the site has been preventing manipulation of the “trending topics”. They have changed the API terms of service, gutted the core functionality of TweetDeck and removed millions of bots which would ‘boost’ a hashtag into the trending charts. If Twitter’s engineers began the drive to delete bot accounts with fake news, psy-ops and media manipulation on their minds, it’s possible they forgot to deal with any other type of bot.

Of course, there are other possibilities. In October 2017, Twitter’s share price began tentatively climbing after two and a half years of terrible performance. In Silicon Valley, investors are wowed by engagement stats, buzzwords and especially user numbers (profitability and monetisation strategy take a back seat for some reason). With the first green shoots of growth in their valuation, perhaps Twitter’s engineers were under orders from management not to kick out too many accounts, not to rock the boat too hard, and to do just enough to give the PR department a good press release.

Image: Prince Alwaleed bin Talal (right) (http://ichef.bbci.co.uk/news/976/cpsprodpb/1F6F/production/_85974080_gettyimages-106469040-copy.jpg)

A further, darker possibility exists. Russia is far from the only actor meddling in the politics of other countries, and Saudi Arabia is certainly another [6]. Twitter’s second-largest shareholder is the Saudi Prince Alwaleed bin Talal [7], a media mogul and member of the royal family. We can only speculate as to the influence he wields within Twitter.

Like so many of the private decisions tech giants make regarding the algorithms which influence our lives, we will likely never know.

[1] https://blog.twitter.com/official/en_us/topics/company/2017/Our-Approach-Bots-Misinformation.html
[2] https://blog.twitter.com/official/en_us/topics/company/2017/Update-Russian-Interference-in-2016--Election-Bots-and-Misinformation.html
[3] https://eticlab.co.uk/introducing-twitter-toolkit/
[4] https://eticlab.co.uk/frolic-political-twitter-bot-demonstratorresearch-platform/
[5] https://www.statista.com/statistics/284503/turkey-social-network-penetration/
[6] https://en.wikipedia.org/wiki/Saudi_Arabian-led_intervention_in_Yemen
[7] http://www.bbc.co.uk/newsbeat/article/34474798/meet-twitters-second-biggest-shareholder-saudi-prince-alwaleed-bin-talal
