
How to Identify Bots, Trolls, and Botnets

Over the past two years, words like “bots,” “botnets” and “trolls” have entered mainstream conversations about social networks and their impact on democracies. However, malicious social media accounts are often mislabeled, derailing the conversations about them from substance to definitions.

This post lays out some of the working definitions and methodologies used by the Atlantic Council’s Digital Forensic Research Lab (@DFRLab) to identify, expose and explain disinformation online.

What Is a Bot?

A bot is an automated social media account run by an algorithm, rather than a real person. In other words, a bot is designed to make posts without human intervention. @DFRLab previously provided 12 indicators that help identify a bot. The three key bot indicators are anonymity, high levels of activity, and amplification of particular users, topics or hashtags.

If an account writes its own individual posts and comments, replies to, or otherwise engages with other users’ posts, then it cannot be classified as a bot.

Bots are predominantly found on Twitter and other social networks that allow users to create multiple accounts.

Is There a Way to Tell If an Account Is Not a Bot?

The easiest way to check whether an account is not a bot is to look at the tweets it has written itself. A simple way to do that is to use Twitter’s search bar (i.e. search “from:@handle”).

If the tweets returned by the search are authentic (i.e. they were not copied from another user), it is highly unlikely that the account in question is a bot.
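The originality check described above can be sketched programmatically. This is a minimal illustration, assuming the tweets have already been collected as plain strings; the account and corpus data below are hypothetical.

```python
# Sketch: an account whose tweets are mostly original (not verbatim copies
# of tweets seen elsewhere) is unlikely to be a bot.
# All data here is illustrative, not real collected tweets.

def originality_ratio(account_tweets, other_tweets):
    """Fraction of the account's tweets that do not appear verbatim
    in a corpus of tweets posted by other users."""
    if not account_tweets:
        return 0.0
    seen_elsewhere = set(other_tweets)
    original = [t for t in account_tweets if t not in seen_elsewhere]
    return len(original) / len(account_tweets)

account = [
    "Had a great time at the conference today!",
    "Here is my take on the new transit plan.",
]
corpus = [
    "Breaking: markets rally",
    "Had a great time at the conference today!",
]

print(originality_ratio(account, corpus))  # 0.5: one of two tweets is copied
```

A ratio near 1.0 suggests authentic authorship; a ratio near 0.0 suggests the account only republishes others’ content.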

What Is the Difference Between a Troll and a Bot?

A troll is a person who intentionally initiates online conflict or offends other users to distract and sow division, posting inflammatory or off-topic messages in an online community or social network. Their goal is to provoke others into an emotional response and derail discussions.

Euler diagram representing the lack of overlap between bot and troll accounts.

A troll is different from a bot because a troll is a real user, whereas bots are automated. The two types of accounts are mutually exclusive.

Trolling as an activity, however, is not limited to trolls alone. @DFRLab has observed trolls use bots to amplify some of their messages. For example, in August 2017, troll accounts amplified by bots targeted the @DFRLab after an article on protests in Charlottesville, Virginia. In this regard, bots can be, and have been, used for the purposes of trolling.

What Is a Botnet?

A botnet is a network of bot accounts managed by the same individual or group. Those who manage botnets, which require original human input prior to deployment, are referred to as bot herders or shepherds. Bots operate in networks because they are designed to manufacture social media engagement that makes the topic on which the botnet is deployed appear more heavily engaged with by “real” users than it actually is. On social media platforms, engagement begets more engagement, so a successful botnet puts the topic it is deployed on in front of more real users.

What Do Botnets Do, Exactly?

The goal of a botnet is to make a hashtag, user or keyword appear more talked about (positively or negatively) or more popular than it really is. Bots target social media algorithms to influence the trending section, which in turn exposes unsuspecting users to conversations amplified by bots.

Botnets rarely target human users, and when they do, it is to spam or generally harass them, not to actively attempt to change their opinions or political views.

How Can You Recognize a Botnet?

In light of Twitter’s bot purge and enhanced detection methodology, bot herders have become more careful, making individual bots more difficult to spot. An alternative to identifying individual bots is to analyze the behavioral patterns of a large botnet as a whole to confirm whether its individual accounts are bots.

@DFRLab has identified six indicators that could help identify a botnet. If you come across a network of accounts that you suspect might be a part of a botnet, pay attention to the list below.

When analyzing botnets, it is important to remember that no one indicator is sufficient to conclude that suspicious accounts are part of a botnet. Such statements should be supported by, at the very least, three botnet indicators.

1. Pattern of Speech

Bots run by the same algorithm are programmed to use the same pattern of speech. If you come across several accounts using the exact same pattern of speech, for example tweeting out news articles using the headline as the text of the tweet, it is likely these accounts are run by the same algorithm.

Ahead of the elections in Malaysia, @DFRLab found 22,000 bots, all of which were using the exact same pattern of speech. Each bot used two hashtags targeting the opposition coalition and also tagged between 13 and 16 real users to encourage them to get involved in the conversation.

Raw data of the tweets posted by bot accounts ahead of the elections in Malaysia. Source: Sysomos
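One simple way to surface a shared pattern of speech is to mask the variable parts of each tweet (mentions, hashtags, URLs) and compare what remains. The sketch below, with illustrative tweets modeled on the Malaysian example, collapses templated bot output to a single signature.

```python
import re

def speech_template(tweet):
    """Reduce a tweet to its structural 'pattern of speech' by masking
    mentions, hashtags and URLs, so templated bot output produces the
    same signature regardless of which tags or users it targets."""
    t = re.sub(r"https?://\S+", "<URL>", tweet)
    t = re.sub(r"@\w+", "<MENTION>", t)
    t = re.sub(r"#\w+", "<HASHTAG>", t)
    return t.strip()

# Illustrative tweets: different hashtags and mentions, same template.
tweets = [
    "#SayNOtoPH #KalahkanPakatan @user1 @user2 join the conversation!",
    "#TagA #TagB @other1 @other2 join the conversation!",
]
signatures = {speech_template(t) for t in tweets}
print(len(signatures))  # 1: both tweets share one pattern of speech
```

If thousands of accounts reduce to a handful of signatures, they are likely run by the same algorithm.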

2. Identical Posts

Another botnet indicator is identical posting. Because most bots are very simple computer programs, they are incapable of producing authentic content. As a result, most bots tweet out identical posts.

@DFRLab’s analysis of a Twitter campaign urging the cancellation of an Islamophobic cartoon contest in the Netherlands revealed dozens of accounts posting identical tweets.

Although individual accounts were too new to have clear bot indicators, their group behavior revealed them as likely part of the same botnet.
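Identical posting is straightforward to detect at scale: group posts by their exact text and flag any text shared verbatim by several distinct accounts. A minimal sketch, using made-up account names and a made-up threshold:

```python
from collections import defaultdict

def identical_post_groups(posts, min_accounts=3):
    """Group accounts by the exact text they posted; return texts that
    several distinct accounts published verbatim (a botnet indicator).
    `min_accounts` is an illustrative threshold, not a fixed rule."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

posts = [
    ("a1", "Cancel the contest now"),
    ("a2", "Cancel the contest now"),
    ("a3", "Cancel the contest now"),
    ("a4", "lovely weather today"),
]
groups = identical_post_groups(posts)
print(groups)  # one group: three accounts posted the same tweet verbatim
```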

3. Handle Patterns

Another way to identify large botnets is to look at the handle patterns of the suspected accounts. Bot creators often use the same handle pattern when naming their bots.

For example, in January 2018, @DFRLab came across a likely botnet in which each bot had an eight-digit number at the end of its handle.

Another tip is systematic alphanumeric handles. The bots from a botnet that @DFRLab discovered ahead of the Malaysian elections all used 15-symbol alphanumeric handles.

Screenshot of Twitter handles that used #SayNOtoPH and #KalahkanPakatan hashtags ahead of the Malaysian elections (Source: Sysomos)
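The two handle patterns described above are easy to express as regular expressions. The handles below are invented for illustration.

```python
import re

# Pattern 1: a handle ending in an eight-digit number.
ENDS_IN_8_DIGITS = re.compile(r"^\w*?\d{8}$")
# Pattern 2: a fully alphanumeric 15-character handle.
ALNUM_15 = re.compile(r"^[A-Za-z0-9]{15}$")

# Hypothetical handles for illustration.
handles = ["maria_k90218437", "x7Qp2LmZ9aR4tUw", "real_person"]
for h in handles:
    print(h,
          bool(ENDS_IN_8_DIGITS.match(h)),
          bool(ALNUM_15.match(h)))
```

A single match means little on its own; the indicator is many suspect accounts matching the same pattern.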

4. Date and Time of Creation

Bots that belong to the same botnet tend to share a similar date of creation. If you come across dozens of accounts created on the same day or over the course of the same week, it is an indicator that these accounts could be a part of the same botnet.

Raw data showing the date of creation of Twitter accounts that amplified PRI party’s candidates in the state of Puebla. Source: Sysomos
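Creation-date clustering can be checked by counting accounts per registration date and flagging days with an unusually high count. The data and the cluster threshold below are illustrative.

```python
from collections import Counter
from datetime import date

def creation_date_clusters(accounts, min_cluster=10):
    """Count accounts per creation date and flag dates on which an
    unusually large number of the suspect accounts were registered.
    `min_cluster` is an illustrative threshold, not a fixed rule."""
    counts = Counter(created for _, created in accounts)
    return {d: n for d, n in counts.items() if n >= min_cluster}

# Hypothetical data: 12 accounts created the same day, one much older.
accounts = [(f"bot{i}", date(2018, 5, 2)) for i in range(12)]
accounts.append(("older_account", date(2015, 1, 7)))

print(creation_date_clusters(accounts))  # flags 2018-05-02 with 12 accounts
```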

5. Identical Twitter Activity

Another botnet giveaway is identical activity. If multiple accounts perform the exact same tasks or engage in the exact same way on Twitter, they are likely to be a part of the same botnet.

For example, the botnet that targeted the @DFRLab back in August 2017 had mass-followed three seemingly unconnected accounts: NATO spokesperson Oana Lungescu, the suspected bot herder (@belyjchelovek), and an account with a cat as its profile picture (@gagarinprosti).

Botnet targeting the @DFRLab followed the same accounts. Source: Twitter

Such unique activity, for example following the same unrelated users in a similar order, cannot be a mere coincidence when done by a number of unconnected accounts and, therefore, serves as a strong botnet indicator.
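One way to quantify identical activity is to compare the sets of accounts that two suspects follow, for instance with Jaccard similarity (shared followees divided by all followees). The follow lists below are illustrative, loosely modeled on the example above.

```python
def followee_overlap(a, b):
    """Jaccard similarity of the sets of accounts two users follow.
    Near-identical follow lists across many accounts suggest that
    they belong to one botnet."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical follow lists for two suspected bots.
bot1 = ["Oana_Lungescu", "belyjchelovek", "gagarinprosti"]
bot2 = ["Oana_Lungescu", "belyjchelovek", "gagarinprosti", "news_outlet"]

print(round(followee_overlap(bot1, bot2), 2))  # 0.75: 3 shared of 4 total
```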

6. Location

A final indicator, especially common among political botnets, is a single location shared by many suspicious accounts. Political bot herders tend to set their bots’ location to the constituency where the candidate or party they are promoting is running, in an attempt to get their content trending there.

For example, ahead of elections in Mexico, a botnet promoting two PRI party candidates in the state of Puebla used Puebla as their location. This was likely done to ensure that real Twitter users from Puebla saw the bot-amplified tweets and posts.

Are All Bots Political Bots?

No, the majority of bots are commercial bot accounts, meaning that they are run by groups and individuals who amplify whatever content they are paid to promote. Commercial bots can be hired to promote political content.

Political bots, on the other hand, are created for the sole purpose of amplifying political content of a particular party, candidate, interest group or viewpoint. @DFRLab found several political botnets promoting PRI party’s candidates in the state of Puebla ahead of the Mexican elections.

Are All Russian Bots Affiliated with the Russian Government?

No. Many botnets with Russian-sounding or Cyrillic-alphabet handles and usernames are run by entrepreneurial Russians looking to make a living online. Many companies and individuals on the Russian internet openly sell Twitter, Facebook and YouTube followers/subscribers as well as engagement, retweets, and shares. Although their services are very cheap ($3 for 1,000 followers), a bot herder with 1,000 bots could make more than $33 per day just by having each bot follow 10 users daily. That amounts to roughly $1,000 per month, about twice the average salary in Russia.

Translation from Russian: 1,000 followers for 220 RUB ($3.34), 1,000 retweets for RUB 130 ($1.97), 1,000 likes for RUB 120 ($1.82). Source:
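The earnings arithmetic can be checked directly. This is a back-of-the-envelope sketch using the advertised rate of 220 RUB (about $3.34) per 1,000 followers.

```python
# Back-of-the-envelope check of the bot-herder earnings estimate.
price_per_1000_follows = 3.34   # USD, from the advertised rate above
bots = 1000                     # bots in the herd
follows_per_bot_per_day = 10    # each bot follows 10 users daily

follows_per_day = bots * follows_per_bot_per_day          # 10,000 follows
daily_income = follows_per_day / 1000 * price_per_1000_follows
monthly_income = daily_income * 30

print(round(daily_income, 2))   # 33.4 USD per day
print(round(monthly_income))    # about 1,002 USD per month
```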

@DFRLab observed commercial Russian botnets amplifying political content worldwide. Ahead of the elections in Mexico, for example, we found a botnet amplifying the Green Party in Mexico. These bots, however, were not political and amplified a variety of different accounts ranging from a Japanese tourism mascot to the CEO of an insurance agency.


Bots, botnets and trolls are easy to tell apart and identify with the right methodology and tools. But remember: don’t label an account a bot or a troll until you can meticulously prove it.

This post first appeared on the Medium page of the Atlantic Council’s Digital Forensic Research Lab and is reproduced here with permission.

Donara Barojan is an assistant director for Research and Development at the DFRLab. She is based at the NATO StratCom Center of Excellence in Riga, Latvia. At the DFRLab, Donara analyzes online disinformation campaigns in the US and Europe with a focus on disinformation in elections. She also leads DFR Lab’s efforts to develop tools to monitor and counter disinformation online.
