As we head into the final day before the 2020 election, disinformation on social media continues to make headlines as it is used to sway public opinion and discourage people from voting. For example, swing states have been targeted with evolving disinformation tactics in an attempt to influence what happens in the voting booth, while Black and Latino voters have been flooded with messages aimed at depressing turnout by fueling cynicism and distrust in the political process.

Clearly, we’ve made little meaningful progress since the last U.S. presidential election. As the 2016 campaigns were heating up, the major social media sites were barraged with posts from Russian-operated accounts in favor of Donald Trump, and an estimated 126 million Americans, roughly one-third of the nation’s population, received automated Russian-backed content on Facebook.

As the share of the population on social media and the average time spent there each day continue to rise, disinformation can have a staggering impact.

Why hasn’t meaningful progress been made to stop the spread of disinformation on social media platforms? Two words: bad bots.

Our online world largely revolves around social media channels including Twitter, Facebook, YouTube and Instagram, and research shows that these sites are regularly abused by bad actors deploying automated bots. According to Carnegie Mellon University researchers, bots drive 10 to 20% of the conversation on social media, particularly around natural disasters, elections, and other charged issues and events.

When hundreds, even thousands, of different social accounts post the same message, or an image is posted and re-posted at the same time, these are the actions of bot operators using fake profiles to try to sway public opinion at scale. While major social media companies have taken steps to find and deactivate fake accounts and delete bot-generated posts, automated influence continues to seep into the national debate.

It’s time to deplatform these bad actors.

Artificial Intelligence (AI) is actively being used to identify tell-tale signs of bot behavior by analyzing account characteristics. For example, identifying signals such as the following (a rough scoring sketch appears after the list):

  • Suspicious posting patterns, such as an account creating hundreds of posts in an hour or posting all night long.
  • Inconsistencies between post and profile, such as a profile written in one language posting in another.
  • Politically charged hashtags or posts behaving anomalously, such as an identical post and its associated hashtags showing up simultaneously across hundreds of accounts.
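
To make these signals concrete, here is a minimal, hypothetical sketch of how such heuristics might be scored for a single account. The field names and thresholds are illustrative assumptions for this article, not any platform’s actual detection logic; real systems weigh far richer signals with machine-learned models rather than hard-coded rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AccountActivity:
    # Illustrative fields only; real platforms track far richer signals.
    posts_last_hour: int
    posts_between_midnight_and_6am: int
    profile_language: str
    post_languages: List[str] = field(default_factory=list)
    identical_post_cluster_size: int = 0  # accounts posting the same text at the same moment

def bot_signal_score(account: AccountActivity) -> int:
    """Count how many coarse bot signals an account trips (0-3)."""
    score = 0
    # Suspicious posting patterns: hundreds of posts per hour, or all-night activity
    if account.posts_last_hour > 100 or account.posts_between_midnight_and_6am > 50:
        score += 1
    # Inconsistency between the profile language and the languages it posts in
    if account.post_languages and account.profile_language not in account.post_languages:
        score += 1
    # Identical post (and hashtags) appearing simultaneously across hundreds of accounts
    if account.identical_post_cluster_size > 100:
        score += 1
    return score

# Example: an account posting all night, in a different language than its profile
suspect = AccountActivity(
    posts_last_hour=220,
    posts_between_midnight_and_6am=180,
    profile_language="en",
    post_languages=["ru"],
    identical_post_cluster_size=450,
)
print(bot_signal_score(suspect))  # -> 3
```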

But spotting bot behavior is not enough. These behaviors occur after the fact — after the fake accounts are created and are doing damage by spreading disinformation.

It is time for social media companies to seize control and deplatform bad bots. Becoming more proactive about stopping bad bots at the login stage is essential to stemming the spread of disinformation.

But fighting bots hasn’t been an easy task. Some of the brightest minds in the industry have spent years playing this cat-and-mouse game, with bot operators finding ingenious ways to work around each new wave of defenses.

Most bot defense techniques rely on historical data to make decisions about the future. Examples include blocking known bad IP addresses and using rate controls to slow down excessive requests. Even machine learning pattern matching applies data from the past to the present. To increase efficacy, businesses have been forced to layer multiple rule-dependent techniques, adding complexity to their operations.
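
As a rough illustration of what these reactive, rule-dependent layers look like, here is a minimal sketch combining a historical IP blocklist with a fixed-window rate limit. The addresses, thresholds, and window length are assumptions for illustration; production systems share this state across servers and stack many more rules on top.

```python
import time
from collections import defaultdict
from typing import Dict, List, Optional

# Reactive, rule-dependent defenses: both rely on knowledge of past behavior.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # example blocklist entries
RATE_LIMIT = 100        # maximum requests allowed per window
WINDOW_SECONDS = 60     # length of the rate-limit window

_recent_requests: Dict[str, List[float]] = defaultdict(list)

def allow_request(ip: str, now: Optional[float] = None) -> bool:
    """Return True if the request passes the blocklist and rate-limit rules."""
    now = time.time() if now is None else now
    # Rule 1: block IPs already known to be bad (historical data)
    if ip in KNOWN_BAD_IPS:
        return False
    # Rule 2: rate control: keep only timestamps inside the window, then count
    recent = [t for t in _recent_requests[ip] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        _recent_requests[ip] = recent
        return False
    recent.append(now)
    _recent_requests[ip] = recent
    return True
```

The weakness is visible even in this toy version: both rules only catch behavior that has already been seen, so a bot operator who rotates IP addresses and stays under the rate limit passes straight through.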

It’s time to flip the bot-fighting paradigm on its head. Instead of looking for bad bots and creating rules based on past behaviors, what if you just looked for humans, and only let traffic in once you could verify with absolute certainty that it was human activity? Think of this as the zero-trust philosophy applied to bot mitigation: all requests are considered guilty until proven innocent.
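
As a minimal sketch of that inversion, assume a hypothetical verify_human() check (a behavioral or challenge-based signal, named here purely for illustration). Traffic is denied by default and admitted only once the check passes:

```python
from typing import Callable

def zero_trust_gate(request: dict, verify_human: Callable[[dict], bool]) -> str:
    """Deny-by-default gate: every request is 'guilty until proven innocent'.

    `verify_human` is a hypothetical callable that returns True only when the
    request carries convincing evidence of human interaction.
    """
    if verify_human(request):
        return "200 OK"        # verified human traffic is allowed through
    return "403 Forbidden"     # everything else is treated as automated
```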

Instead of having to apply and update a rules-dependent system that is always a step behind, modern detection methods can identify the unmistakable evidence trail that automated bots leave when they interact with websites, mobile apps, and APIs. This approach, combined with mitigation methods that inflict financial damage on attackers and slow them down with exponentially difficult mathematical challenges, has proven to deliver not only immediate efficacy but efficacy that persists over the long term.
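
Those "exponentially difficult mathematical challenges" are broadly similar in spirit to proof-of-work: the defender hands a suspected bot a puzzle whose expected solving cost doubles with every added bit of difficulty, which is negligible for one legitimate client but expensive for an operator running thousands of automated sessions. A rough, hypothetical sketch of that idea (not any specific product’s actual mechanism):

```python
import hashlib
import os
from itertools import count
from typing import Tuple

def issue_challenge(difficulty_bits: int) -> Tuple[bytes, int]:
    """Defender side: issue a random challenge at a chosen difficulty.

    Expected solving cost doubles with each extra bit, so the difficulty can
    be escalated for traffic that looks increasingly automated.
    """
    return os.urandom(16), difficulty_bits

def solve_challenge(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: find a nonce whose SHA-256 hash falls below the target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_solution(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Defender side: verification is cheap, a single hash comparison."""
    target = 1 << (256 - difficulty_bits)
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < target

# Example: a 20-bit challenge takes roughly a million hash attempts to solve,
# while checking the answer costs the defender a single hash.
challenge, bits = issue_challenge(20)
nonce = solve_challenge(challenge, bits)
assert verify_solution(challenge, bits, nonce)
```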