Wired recently reported that roughly 25% of Donald Trump’s Twitter fans are bots: automated Twitter accounts not run by any actual human. Apparently, many of these bot accounts existed merely to make it look as though Trump was being supported by Latino voters; fake Twitter accounts with names like Francisco Palma and Alberto Contreras could give that impression. But Wired’s researchers saw more pernicious behavior, too. “We’ve caught bots disseminating lies, attacking people, and poisoning conversations,” they say.
If presidential candidates use Twitter bots, should you consider them for your business or organization? Probably not. Here’s why:
- They’re deceptive. Obviously, when Trump’s campaign set up a fake Twitter account for Alberto Contreras, it did so with the intention of tricking people. Doing things that are clearly intended to trick people is likely to cost you trust, one of the most important currencies on the internet.
- They’re not very good at Twitter. Robots aren’t as good at human language as humans are. The Trump bots, for example, all used exactly the same phrase in their tweets. People don’t do that. Naturally, human beings who see the same phrase more than once will grow suspicious, and the onslaught won’t have the intended effect.
- They can get your account blocked or suspended. Some automatic tweets are expressly allowed by Twitter. For example, you can auto-tweet a newsletter or blog post, and there are TwitterBots that do potentially helpful things like tweeting traffic or weather info in response to people’s tweets about their travel plans. But Twitter is generally opposed to TwitterBots, saying that “most automation is detrimental to the user experience and frequently results in blocks and suspensions.” Automatically retweeting any tweet containing specific keywords, for example, is often a violation of Twitter’s rules.
- They can offend your customers. Most TwitterBots don’t do much; they tweet a single message or blast an ad at people who use a particular keyword. They don’t develop relationships with other people. But the bots that behave closest to humans can be the most offensive. For example, you could set up a TwitterBot to respond to any mention of joint pain with “I saw your tweet — this product worked for me!” and a link to your glucosamine product page. Some innocents might believe that this is a message from a concerned individual, until they get the identical response again, at which point they’ll be miffed. Soon they’ll be tweeting something like this message I saw on Twitter when I gathered the screenshot below: “Some weird joint pain account is lurking me and Stu’s Overwatch conversation.”
A more extreme example is Microsoft’s Tay, a bot designed to act like a human. When actual humans decided to mess with Tay by teaching her to make blatantly racist comments, she posted some pretty appalling things. You can’t account for people’s sense of humor, but you can be sure that a bot can’t defend itself against this kind of behavior.
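Part of the problem with a keyword-reply bot is how simple its logic really is: match a keyword, send everyone the identical canned message. Here is a minimal sketch of that logic in plain Python (no real Twitter API calls; the keyword, reply text, and link are hypothetical examples, not anything from the article):

```python
# Minimal sketch of a keyword-triggered reply bot (simulated; no Twitter API).
# The keyword, canned reply, and product link are hypothetical examples.

CANNED_REPLY = ("I saw your tweet - this product worked for me! "
                "https://example.com/glucosamine")

def reply_if_match(tweet_text, keyword="joint pain"):
    """Return the canned reply if the tweet mentions the keyword, else None."""
    if keyword in tweet_text.lower():
        return CANNED_REPLY
    return None

# Simulated stream of incoming tweets.
tweets = [
    "My joint pain is back after that Overwatch marathon",
    "Great weather today",
    "Anyone else get joint pain from typing all day?",
]

replies = [r for r in (reply_if_match(t) for t in tweets) if r]
# Every matching tweet gets the exact same reply -- which is
# precisely what tips people off that they're talking to a bot.
```

A real version would poll a search endpoint and post the replies, but the core behavior is exactly this: one string, broadcast to everyone who matches, which is both why such bots feel robotic and why unsolicited keyword replies run afoul of Twitter’s automation rules.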
- They don’t work. If you have a very specific short-term goal, such as convincing people during a political primary that you have Hispanic fans, a quick bot attack might work. Over the long run, though, the accounts are obviously fake. Like many tricks, they’re designed for a quick strike followed by a quick getaway. If your business, practice, or organization is about building long-term relationships with the people you serve, TwitterBots can’t help you.