With the news of Elon Musk purchasing Twitter, two related issues have surfaced in many discussions: the spread of misinformation and disinformation, and the role that bots play in spreading it. There has also been a great deal of misunderstanding about both, and about how easy they may or may not be to eliminate from the platform.
The most efficient and effective way to spread malicious disinformation campaigns at scale is through automation, meaning the use of bots. This is similar to the approach cybercriminals take when attacking sites, using techniques such as credential stuffing, carding, and inventory hoarding. When hundreds of Twitter accounts tweet the same message at the same time, or an image is posted and re-posted on Facebook simultaneously, it is likely the action of fake accounts generated by bots.
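The coordination signal described above - many distinct accounts posting identical text within a short time window - can be sketched as a simple detection heuristic. This is a minimal illustration, not any platform's actual method; the post fields (`account`, `text`, `posted_at`) and thresholds are assumptions for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_coordinated_posts(posts, window_seconds=60, min_accounts=50):
    """Flag message texts posted by many distinct accounts within a
    short time window - one simple signal of coordinated bot activity.

    `posts` is a list of dicts with hypothetical keys:
    "account", "text", and "posted_at" (a datetime).
    """
    # Group posts by identical text.
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append(post)

    window = timedelta(seconds=window_seconds)
    flagged = {}
    for text, group in by_text.items():
        group.sort(key=lambda p: p["posted_at"])
        start = 0
        # Slide a time window over the sorted posts; if enough distinct
        # accounts fall inside one window, flag the message.
        for end in range(len(group)):
            while group[end]["posted_at"] - group[start]["posted_at"] > window:
                start += 1
            accounts = {p["account"] for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged[text] = sorted(accounts)
                break
    return flagged
```

Real detection systems layer many such signals (posting cadence, account age, network fingerprints), which is part of why, as discussed below, sophisticated operators can still evade simple rules.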
Carnegie Mellon estimates that bots are involved in 10-20% of the conversation on social media, particularly as it relates to natural disasters, elections, and other political and societal issues and events. During the June 2016 Brexit vote, for example, Russian accounts posted almost 45,000 messages pertaining to the EU referendum in 48 hours. More recently, bots have aggressively spread climate change disinformation in order to diminish support for climate policy. And social media - and bots - played an outsized role in spreading vaccine misinformation, promoting 2020 election lies, and driving the 2021 attack on the U.S. Capitol.
The problem is the impact that these bots have on forming or changing public opinion, influencing the national conversation, and undermining our democracy. If social media companies know that they have bot problems, the question is: why can't they do something about them before the disinformation is spread?
The short answer is that bot operators have become highly proficient at disguising themselves as humans and fooling ineffective security tools like CAPTCHAs. The bottom line is that staying ahead of an intelligent, collaborative, and motivated bot supply chain may not be as easy as Musk thinks it is.
According to Sam Crowther, CEO and founder of Kasada, "Defeating spam bots from Twitter will be an enormous undertaking. The bot ecosystem has evolved more in the past two years than in the past decade. Bot operators look and act just like humans by using residential proxy networks and highly customized open-source automation tools. Spam bots will remain pervasive because a motivated bot ecosystem will always find new ways to evade detection by retooling and reverse engineering traditional rule-based anti-bot systems. Most anti-bot solutions on the market today are reactive and haven't been able to keep pace with adversaries because there are inherent issues with the way bot management solutions originated and evolved.
"What's needed is a modern, proactive approach where anti-bot tools can adapt and change as fast, if not faster, than the attackers working against them. The key is to stop automation from generating fake accounts from the onset, before any damage can be done."