The authors, Jordan Wright and Olabode Anise, explained that they were not specifically looking for automated accounts perpetrating scams or behaving maliciously, but simply for accounts that were automated, i.e., not controlled by an actual user.
According to a technical paper outlining Duo's research, the team stumbled upon a large botnet of approximately 15,000 bots that uses a "unique three-tiered hierarchical structure" and is involved in the prevalent crypto giveaway scams that many of our readers will be familiar with.
To conduct this research, Wright and Anise compiled a data set of 88 million Twitter accounts, including standard information such as screen name, tweet count, and follower count, as exposed through the Twitter application programming interface (API). The researchers then applied machine learning algorithms to a subset of these standard account attributes to differentiate between human-controlled and automated accounts.
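Duo's own tools were slated for release on GitHub only after the Black Hat talk (see below), but the general shape of such a pipeline is easy to sketch. The Python snippet below is an illustrative assumption, not Duo's actual tooling: it pulls the kind of standard account attributes mentioned above via the Tweepy library and feeds them to an off-the-shelf scikit-learn classifier. The feature set, training labels, account handles, and model choice are all placeholders.

```python
# Illustrative sketch only -- not Duo's actual pipeline. Step 1: pull standard
# account attributes from the Twitter API via Tweepy. Step 2: train an
# off-the-shelf classifier on those attributes.
import tweepy
from sklearn.ensemble import RandomForestClassifier

def account_features(user):
    # Standard attributes the Twitter API exposes for every account.
    return [
        len(user.screen_name),   # screen name length
        user.statuses_count,     # tweet count
        user.followers_count,    # follower count
        user.friends_count,      # accounts this user follows
        user.listed_count,       # public lists containing this user
    ]

auth = tweepy.OAuth1UserHandler("API_KEY", "API_SECRET",
                                "ACCESS_TOKEN", "ACCESS_SECRET")  # placeholders
api = tweepy.API(auth, wait_on_rate_limit=True)

# Hypothetical hand-labeled training set: 1 = automated, 0 = human-controlled.
screen_names = ["example_account_1", "example_account_2"]
labels = [1, 0]

X = [account_features(api.get_user(screen_name=name)) for name in screen_names]
clf = RandomForestClassifier().fit(X, labels)
```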
According to the technical paper, the first tier of bots is responsible for imitating legitimate crypto-affiliated accounts, using what Wright and Anise believe to be randomly generated screen names and copying the real names and profile pictures of the genuine accounts.
The second tier is made up of "hub accounts," which don't necessarily have anything to do with the scam bots themselves, but are hypothesized to be "randomly chosen accounts that the bots follow in an effort to appear legitimate."
The final tier in the network was found to consist of "amplification bots," which exist solely to like tweets sent by the scam bots, artificially inflating those tweets' like counts and furthering the appearance of legitimacy.
After researching not only the attributes of the crypto scam accounts but also those of the accounts they follow, Wright and Anise concluded that "a thread can be followed that can result in the unraveling of an entire botnet."
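That "thread" can be pictured as a breadth-first walk over Twitter's own API, alternating between a scam bot's tweets, the accounts that like them, and the other tweets those accounts like. The sketch below is a hypothetical illustration of that idea using Tweepy and the Twitter v2 API, not the authors' published method; the endpoint choices, limits, and the `unravel` helper are assumptions.

```python
# Hypothetical sketch of "following the thread" -- not the authors' released
# tooling. Starting from one known scam bot, walk: its tweets -> the users who
# liked them (candidate amplification bots) -> other tweets those users liked
# (candidate sibling scam bots), breadth-first, until a cap is reached.
from collections import deque
import tweepy

client = tweepy.Client(bearer_token="BEARER_TOKEN")  # placeholder credential

def unravel(seed_bot_id, max_accounts=1000):
    seen, queue = {seed_bot_id}, deque([seed_bot_id])
    while queue and len(seen) < max_accounts:
        bot_id = queue.popleft()
        for tweet in client.get_users_tweets(bot_id, max_results=5).data or []:
            # Accounts liking a scam tweet are candidate amplification bots.
            for liker in client.get_liking_users(tweet.id).data or []:
                liked = client.get_liked_tweets(liker.id, max_results=10,
                                                tweet_fields=["author_id"]).data or []
                for liked_tweet in liked:
                    # Authors of other tweets this amplifier likes are
                    # candidate members of the same scam-bot tier.
                    if liked_tweet.author_id not in seen:
                        seen.add(liked_tweet.author_id)
                        queue.append(liked_tweet.author_id)
    return seen  # account IDs in the candidate botnet
```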
In an August 6 press release on the findings, Anise noted:
"Users are likely to trust a tweet more or less depending on how many times it's been retweeted or liked. Those behind this particular botnet know this, and have designed it to exploit this very tendency."
For his part, Wright explained:
"Malicious bot detection and prevention is a cat-and-mouse game. We anticipate that enlisting the help of the research community will enable discovery of new and improving techniques for tracking bots. However, this is a more complex problem than many realize, and as our paper shows, there is still work to be done."
The two will present their findings at the Black Hat security conference tomorrow, August 8, in Las Vegas at 2:40 p.m. PDT. After the presentation, the tools and techniques used by the team will be made publicly available on GitHub.
In response to the findings, a Twitter spokesperson commented:
"Twitter is aware of this form of manipulation and is proactively implementing a number of detections to prevent these types of accounts from engaging with others in a deceptive manner. Spam and certain forms of automation are against Twitter's rules ... [C]ertain types of spam may be visible via Twitter's API even if it is not visible on Twitter itself."