A Perfect Storm for Conspiracies: Confirmation Bias & Algorithms

Aidan Hotte
9 min read · Jan 13, 2021


Source: Brandon Raygo

To start things off: this article is not about why people believe conspiracies. That is a topic for a completely separate article. Instead, I will focus on why conspiracies were so extremely prominent and widespread this year.

There’s no denying that 2020 was the year of the conspiracy. Since the COVID-19 pandemic started there have been plenty of “skeptics” alleging a man-made virus, or no virus at all. As the year progressed we witnessed the rise of QAnon, a disproven far-right conspiracy theory alleging a satanic cabal within the Democratic Party, one with believers in important Republican positions. To top things off, 2020 finished with Joe Biden winning the US Presidential election, which spawned countless unfounded conspiracy theories about election fraud. Conspiracies, especially about the election, have gone so mainstream that news outlets such as Fox News, and even major politicians including the President himself, have begun to peddle them as a sort of twisted political strategy.

The COVID-19 Pandemic sparked lockdowns across the globe - Source: Obi Onyeador on Unsplash

So why were conspiracy theories so prominent this year?

There are plenty of potential explanations. The simplest is that far more people than usual were stuck inside with nothing to do, which could have led some to seek out conspiracy theories simply to occupy themselves. However, I believe there is more than monotony at fault. These conspiracies were circulated and strengthened by two things working in tandem: the psychological concept of “confirmation bias”, and social media algorithms.


Confirmation Bias - “People are prone to believe what they want to believe.”

Source: capital.com

The term “confirmation bias” was coined by English psychologist Peter Wason to describe “a tendency to search for or interpret information in a way that confirms one’s preconceptions” (ScienceDaily). To put this year in the context of confirmation bias, I’ll need to explain the concept first, and the best way to do that is with an example: a 1979 Stanford University study.

The Stanford University Psychology Building - Source: Kate Chesley

In this study, two groups of students were brought in: one group supported the death penalty and the other opposed it. The students were then given two different studies to respond to, each containing equally compelling research and data for its side. For example, one study said that the death penalty deterred crime while the other said it had no effect on crime. The twist, as is common in this kind of experiment, was that neither study contained any real information; the contents were specifically designed to be equally realistic and convincing.

After all the responses were collected, the results were clear and fascinating: the students who supported the death penalty found the evidence supporting their view highly credible and the opposing data unconvincing, and the same was true, in reverse, of the group that opposed the death penalty.

Both groups were then given a survey after reading the two studies, and both responded that they were actually more convinced they were right, even after seeing evidence that went against their views.

The conclusion of this experiment solidifies the idea of confirmation bias: more often than not, we choose what we want to believe and then seek out facts to support that belief. Both groups of students were given equally falsified data, and yet each still rated the data that supported its own position as highly credible.

This study is striking evidence that we humans are firmly set in our ways, and often dangerously stubborn when it comes to changing our minds or accepting new ideas as reality.

The Death Penalty was — and still is — a divisive issue in the US, making it a great topic for the study - Source: Gallup Polls

This leads to the question: why are we like this? The human mind is a mystery, but this question may well be answerable. Hundreds of years ago, societies did not run on the internet or television. They quite often had one source of information, if any at all. Our ancestors therefore had little cause to doubt the information they read or heard.

Fast forward to our generation of the internet, social media, and partisan television, however, and there is far more that we should be doubtful of. Unfortunately, we have not had enough time to adapt. Many people who grew up before the internet, when there were only a few major sources of credible information, still take everything presented to them as “news” at face value. “Fake news” as we know it today is a very recent concept for a lot of people, and the problem is made much worse when the false story concerns something certain people already believe to be true.

Confirmation bias comes into play in the modern age when someone encounters two conflicting stories. For instance, if someone who works for a coal company reads one article saying that coal emissions are harming our planet and another saying they aren’t, they will be much more likely to believe the second, even if it isn’t credible at all.

Confirmation bias itself isn’t a new concept (it was mentioned as far back as Dante’s “The Divine Comedy” in 1320), but it has only been mainstream for a few decades. When the Stanford study released its findings, the result was shocking; today, no one would bat an eye. Confirmation bias is something we as humans need to learn to overcome because, as 2020 made especially clear, its results can be devastating.

Algorithms: “What’s in your filter bubble depends on who you are, and it depends on what you do. But you don’t decide what gets in.” (Eli Pariser)

Simplistic depiction of a filter bubble.

As social media becomes an ever greater part of our lives, its impact on our societies grows in lockstep. In 2020, over 3.6 billion people (nearly half of the world’s population) used social media. With that wide a reach, platforms like Instagram or Facebook have countless opportunities to unify people and foster relationships between groups that would otherwise never have crossed paths in their entire lifetimes. Unfortunately, these platforms have massive flaws which, in turn, are spread to their billions of users.

One such flaw is the “filter bubble”, a term coined by internet activist Eli Pariser to describe the way social media platforms show users content that confirms and supports their prior opinions, which in turn fuels partisanship and internet tribalism. This amplifies confirmation bias online: users end up seeing only content they already agree with, while anything they might disagree with is simultaneously excluded.

These bubbles work by taking in a user’s personal data (who they follow, what they like, what they search for) and feeding it into an algorithm that determines the user’s interests and opinions. The algorithm makes an assumption based on the data, and then shows the user the content that matches that assumption.
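As a toy illustration of that process (not any real platform’s system; all names, topics, and weights here are hypothetical), the profile-then-match step might look like this:

```python
from collections import Counter

def infer_interests(likes, follows, searches):
    """Toy model: tally topic tags from a user's activity
    to guess what they care about. Weights are invented."""
    scores = Counter()
    for topic in likes:
        scores[topic] += 3      # treat likes as a strong signal
    for topic in follows:
        scores[topic] += 2
    for topic in searches:
        scores[topic] += 1      # treat searches as a weaker signal
    return scores

def rank_feed(posts, interests):
    """Surface the posts whose topic best matches the inferred profile."""
    return sorted(posts, key=lambda p: interests[p["topic"]], reverse=True)

interests = infer_interests(
    likes=["politics", "politics", "sports"],
    follows=["politics"],
    searches=["cooking"],
)
feed = rank_feed(
    [{"id": 1, "topic": "cooking"},
     {"id": 2, "topic": "politics"},
     {"id": 3, "topic": "sports"}],
    interests,
)
# The political post rises to the top of the feed: the bubble forms.
```

The key point is that the ranking is driven entirely by past behaviour, so whatever the user already engages with is exactly what they will see more of.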

This filtering of a user’s feed is constant and continuous, changing with each like or follow. Algorithms of this type power major social media features such as Google’s ‘Personalized Search Results’, Facebook’s ‘Personalized News Stream’, and especially TikTok’s ‘For You’ page, which is a stream of content built explicitly from the user’s interests and activity.

Relative newcomer TikTok stands out among social media platforms with its “For You” page, which uses algorithms to show users content related to them. - Source: WeAreSocial & Hootsuite

The filter bubble is a widely recognized concept in the tech world; even industry leaders such as Microsoft co-founder Bill Gates have acknowledged its existence. As Gates said,

“[Technology such as social media] lets you go off with like-minded people, so you’re not mixing and sharing and understanding other points of view … It’s super important. It’s turned out to be more of a problem than I, or many others, would have expected.”

While Microsoft doesn’t own any major social media platform, having someone so ingrained in the world of technology speak about this issue lends it real credibility.

The Perfect Storm: “It is easier to fool people than to convince them they have been fooled.”

Despite lack of evidence, most Republicans believe election fraud occurred. - Source: FiveThirtyEight

In this new social media world, we spend an average of 2 hours and 24 minutes each day ingesting content on various platforms. It is within that window that the two forces, confirmation bias and the filter bubble, converge to create our current state of affairs. It is quite apparent that the majority of recent political conspiracy theories stem from the Republican side (although Democratic conspiracies certainly exist). This detail, however small, is crucial to understanding how conspiracy theories rose from fringe concepts to popular ideas. Let me describe the process:

A regular person who happens to be Republican wants to watch the news. In line with confirmation bias, they are drawn to a right-leaning broadcaster such as Fox News. With newfound trust in the organization, they begin to follow Fox and a few of its reporters on Facebook or Twitter.

So far nothing has gone wrong, as there is nothing inherently wrong with watching Fox News.

Things begin to take a turn for the worse, however, when this person’s social media algorithms fine-tune their feed to match their apparently right-wing views. They are then shown ever more pro-Republican and anti-Democrat content, and through further subconscious confirmation bias they believe it all to be true.

After all, how could it be false if deep down they already believe it?

As our example user is shown increasingly Republican-centred content, the algorithm begins to snowball: it serves not only more content, but more extreme content, as it adjusts for the large appetite for conservative news that it has itself created. Once the mainstream, still somewhat credible sources are deemed not what the user wants, it starts showing them content from far-right “news” pages.

These pages, under the guise of news, are mainly conspiracy-theory hubs with no actual news provided. But because the filter bubble has shifted gradually, this person still views the far-right content as credible and correct, since it fits their views. That leads to belief in, and then propagation of, conspiracies they see online and incorrectly take to be fact.
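The gradual drift described above is, at bottom, a feedback loop: engagement shapes the feed, and the feed shapes the next round of engagement. A deliberately simplified sketch (the numbers are hypothetical and this is not any platform’s real algorithm) shows how quickly an initially balanced feed can tip one way:

```python
def simulate_feedback_loop(steps=5, reinforcement=0.3):
    """Toy filter-bubble loop: the share of one viewpoint in the feed
    grows each round because the user engages with it more often."""
    share = 0.5  # feed starts evenly split between two viewpoints
    history = []
    for _ in range(steps):
        # the user engages with the dominant viewpoint in proportion
        # to how much of it they see...
        engagement = share
        # ...so the algorithm boosts that viewpoint next round
        share = share + reinforcement * engagement * (1 - share)
        history.append(round(share, 3))
    return history

# the dominant viewpoint's share of the feed climbs every round
print(simulate_feedback_loop())
```

Because each round’s output feeds the next round’s input, even a modest reinforcement factor compounds, which is the “snowball” the text describes.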

QAnon is a far-right online conspiracy theory, the type that may be peddled to Republicans online as “news” - Source: Kyle Grillot via Getty Images

In closing:

There are countless reasons why people believe conspiracies, but those reasons, in my opinion, are not what matters most. Why people believe is largely unchangeable. What matters is how these conspiracies are spread and passed off as ordinary news, because that is something we can change.

Although we won’t be able to keep people from letting confirmation bias impact their judgement, it is possible to change the way algorithms are used in feeds and search engines to prevent radicalization. One option is to leave political parameters out of the algorithm entirely, so that everyone is exposed to differing opinions from a variety of sources.
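To picture that proposal concretely, here is a minimal, purely illustrative sketch (the tag set and function names are hypothetical, not a real API) of stripping political signals from a user profile before they can influence ranking:

```python
# hypothetical set of topic tags a platform might classify as political
POLITICAL_TOPICS = {"politics", "elections", "partisan-news"}

def depoliticize_profile(interest_scores):
    """Drop political signals from the profile so the ranking
    algorithm cannot use them to sort the feed."""
    return {topic: score
            for topic, score in interest_scores.items()
            if topic not in POLITICAL_TOPICS}

profile = {"politics": 8, "sports": 3, "cooking": 1}
neutral = depoliticize_profile(profile)
# political content would then rank no higher than anything else
```

With political topics carrying no weight, political posts from across the spectrum would surface on equal footing rather than being tailored to the user’s leanings.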

Another thing social networks can do is get serious about policing false information on their platforms. Freedom of speech is an important basic human right, but it has its limits: speech must respect the rights and reputations of others, and must not endanger public safety. There is no free-speech protection for defamation or libel. Many of our current problems with conspiracies and damaging false information would disappear if social media platforms broadly banned knowingly promoting fake news.

For now though, we can only try to raise awareness and wait anxiously for something of actual substance to be done to address this rapidly spreading pandemic: the conspiracy.

Works Cited: https://docs.google.com/document/d/1QYy1-8eydnSQAkeCGmJxuV-eKF8ceK0YgZba2zcy2Iw/edit?usp=sharing

My name is Aidan. I’m a 19-year-old Canadian student who is interested in all things political, legal, and musical. Follow me on Medium and my other accounts if you enjoyed the article!

LinkedIn: https://www.linkedin.com/in/aidan-hotte-979b06182/

Twitter: https://twitter.com/AidanHotte

YouTube: https://www.youtube.com/channel/UCMM5fGocLWw3p2YB3ZpWbnw
