Wired magazine explained how organized groups and campaigns are deliberately gaming Amazon’s algorithm to spread medical misinformation. Because the algorithms that decide what websites show us make no human, morality-based judgments, they can be tricked into surfacing misinformation fairly easily.
“They’re engineered to show us things we are statistically likely to want to see,” Wired points out, “content that people similar to us have found engaging—even if it’s stuff that’s factually unreliable or potentially harmful.” Human-curated information doesn’t work on the same system.
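The “people similar to us” logic Wired describes is, at its core, collaborative filtering. Here is a minimal sketch (all users, items, and numbers invented) of how such a recommender can surface an item purely because similar users engaged with it, with no notion of whether the item is reliable:

```python
# Minimal collaborative-filtering sketch (invented data): items are
# recommended because similar users engaged with them. Note the
# algorithm has no concept of whether an item is accurate or harmful.

import math

# Rows = users, columns = items A-D; 1 = the user engaged with the item.
engagement = {
    "alice": {"A": 1, "B": 1, "C": 0, "D": 0},
    "bob":   {"A": 1, "B": 1, "C": 0, "D": 1},  # also engaged with D
    "carol": {"A": 0, "B": 0, "C": 1, "D": 0},
}

def cosine(u, v):
    """Cosine similarity between two engagement vectors."""
    dot = sum(u[i] * v[i] for i in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user):
    """Score items the user hasn't seen by how similar their fans are."""
    scores = {}
    for other, items in engagement.items():
        if other == user:
            continue
        sim = cosine(engagement[user], items)
        for item, seen in items.items():
            if seen and not engagement[user][item]:
                scores[item] = scores.get(item, 0.0) + sim
    return max(scores, key=scores.get) if scores else None

# Alice's history looks like Bob's, so she is shown item D --
# whatever item D actually contains.
print(recommend("alice"))  # -> D
```

Nothing in the scoring function asks whether item D is trustworthy; engagement by similar users is the only signal.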
Factors like the number of positive reviews a book has can be manipulated easily, and can also affect Amazon recommendations. Sure enough, when we checked recommendations for “vaccination” at Amazon, we were immediately offered anti-vaccination books.
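To see why review counts are such an attractive target, consider a toy ranking score (the formula here is hypothetical, not Amazon’s) in which rating is weighted by review volume. A coordinated review campaign changes only the count, yet moves the item up the ranking:

```python
# Toy illustration (hypothetical scoring formula, invented numbers):
# when a ranking signal depends on review counts, coordinated fake
# reviews can lift an item past better-vetted titles.

import math

def score(avg_rating, review_count):
    # Hypothetical score: rating weighted by log of review volume.
    return avg_rating * math.log1p(review_count)

honest   = score(4.6, 40)    # accurate book, organic reviews
before   = score(4.8, 10)    # misinformation title, pre-campaign
after    = score(4.8, 2000)  # same title after a review campaign

print(before < honest < after)  # -> True
```

Only the review count changed, but under this count-driven score the misinformation title jumps from below the accurate book to well above it.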
Facebook led off results for the same search with a link to its own new policy designed to reduce misinformation. Then it went right on to recommending anti-vaccination groups.
Why do algorithms encourage misinformation?
Algorithms don’t set out to spread misinformation. But they are built to produce specific business results.
For example, Google’s search algorithm is designed to give searchers the best answer to their question. If we search with Google.com and get bad answers, we might go to some other search engine. Since Google’s revenue relies heavily on our seeing ads, this would be bad for business.
We’ve seen before that searchers for medical information typically prefer to access easier, more engaging content. We’ve also seen that easier, more engaging content is often less accurate. This helps to explain the high proportion of inaccurate information offered to searchers. The solution to medical misinformation? Better, more accessible, accurate medical information.
YouTube is worse. That’s because the algorithm isn’t designed to give you better information than Vimeo. It’s designed to keep searchers at YouTube longer. More time spent on YouTube means more ads viewed. But you don’t hear people saying, “I can’t believe how much time I wasted on YouTube today! There were just so many well-researched and cogently argued video clips that I couldn’t tear myself away!”
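The objective described above can be sketched in a few lines. In this toy ranking (all titles and numbers invented), candidate videos are sorted by expected watch time, the kind of engagement metric the text describes, and accuracy never enters the score:

```python
# Toy engagement-maximizing ranker (invented data): videos are sorted
# by expected watch time. Whether a video is accurate plays no role.

candidates = [
    # (title, click_probability, avg_minutes_watched, is_accurate)
    ("Calm, well-sourced explainer",  0.05, 4.0,  True),
    ("Outrage-bait conspiracy clip",  0.20, 12.0, False),
    ("Dry lecture recording",         0.02, 30.0, True),
]

def expected_watch_time(video):
    _, p_click, minutes, _ = video
    return p_click * minutes  # expected minutes this video adds

ranked = sorted(candidates, key=expected_watch_time, reverse=True)
for title, *_ in ranked:
    print(title)
# The conspiracy clip ranks first (0.20 * 12.0 = 2.4 expected minutes)
# even though our toy data flags it as inaccurate.
```

The `is_accurate` column sits right there in the data, but the objective function never reads it; that is the whole point.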
And Amazon’s algorithm is of course designed to sell things. That’s about as far from providing valuable data as you can get.
What’s the solution?
There are plenty of possible solutions. However, many conversations on the subject focus on impossible solutions. A news report included this quote from a medical expert: it may be necessary for Facebook to pull “scurrilous and erroneous” sites down completely.
Facebook doesn’t have the power to remove websites from the internet, however scurrilous they may be.
All these sites do have the power to change their algorithms. However, algorithms designed to help companies make money may not be interchangeable with algorithms that favor accurate and helpful information.
Focusing on specific topics can be easier. The recent rise of measles outbreaks has increased awareness of anti-vaccination efforts. The World Health Organization has listed “vaccine hesitancy” as one of the top ten global health threats. That allows Pinterest to make a firm decision on the topic, as you can see in the screenshot below:
This is a solution, but it limits search options and could still let plenty of misinformation seep through.
Facebook is working to send the ball back into the users’ court. By creating more private channels, Facebook has less responsibility for what people say. They may have to take some control of what folks shout in the virtual town square, but the friends and ads you allow into your virtual living room are not their problem.
What can we do?
It’s clear, when you listen to the talk around the virtual water cooler, that some people think of Facebook, Google, and perhaps even Amazon as public utilities.
Being good digital citizens and taking responsibility for our own actions would be a good first step. But we have to admit that many — perhaps most — readers don’t have enough knowledge to be safe from manipulation by people determined to spread misinformation.
And the algorithms are not designed to solve that problem.