The internet is full of misinformation. That’s by design, experts say



Quirks and Quarks | 19:02 | Science suggests humans are not built for the information age

This current time period we’re in has been called the Information Age, and it’s easy to see why.

This year, the global amount of data generated is expected to reach 181 zettabytes, or 181 trillion gigabytes, up from just two zettabytes in 2010. Some studies say there is now more data out there than there are stars in the observable universe.
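For readers who want to verify the unit conversion in that figure, here is a quick sketch (not part of the original reporting): a zettabyte is 10^21 bytes and a gigabyte is 10^9 bytes, so the two quoted numbers describe the same quantity.

```python
# Sanity check: 181 zettabytes expressed in gigabytes.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

data_2025_gb = 181 * ZETTABYTE // GIGABYTE

# 181 ZB works out to 181 * 10^12 GB, i.e. 181 trillion gigabytes.
print(data_2025_gb == 181 * 10**12)
```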

But all this information comes at a cost.

“We’re living at a time that I’ve categorized as a knowledge crisis,” says University of Alberta law professor Tim Caulfield.

“We have access to more information now than ever before in human history … despite that, we’ve never been more misled, more confused.”

A 2023 Statistics Canada survey said 43 per cent of Canadians feel that it’s getting harder to decipher what’s true and what’s fake online, and that was even before the rise of AI and deepfakes.

Several recent scientific studies have attempted to quantify just how much of this information is actually real.

“Our information environment is completely manipulated, and often people don’t realize the degree to which that is the case,” said Caulfield.

Analyzing TikTok, Amazon, and search engines

In a 2024 study published in the Journal of Medical Internet Research, Caulfield and his colleagues looked at the quality of books about cancer on Amazon.

“We found 49 per cent of the books had misleading content in them, and some of it was just completely atrocious,” said Caulfield.

They also found that on the first page of results on Amazon, 70 per cent of the content was misleading.

“And once again, sometimes just hardcore bunk,” he adds.

This photo shows a Facebook ‘military interest’ page that misrepresented old photos and videos of army operations to falsely claim that Washington was helping its ally Manila prepare for war. It is an example of misinformation that prompts an emotional response to get the reader’s attention. (Jam Sta Rosa/Getty Images)

A study published in March, led by University of British Columbia PhD student Vasileia Karasavva, took aim at health information presented on TikTok.

The researchers analyzed the top 100 TikTok videos by view count that mentioned ADHD, and shared them with clinical psychologists who work with ADHD patients. The psychologists reported that half the videos contained some sort of misinformation.

“We sort of saw that a lot of this information didn’t match up with the diagnostic criteria,” said Karasavva.

“They were presenting things that have more to do with normal human behaviour as symptoms of ADHD.”

The team also looked at the creators themselves, and found that half of them stood to gain financially from the content, posting direct sales links to supposed cures.


In another study from March, Tulane University assistant professor Eugina Leung investigated how results on search engines like Google, Bing and ChatGPT differ depending on the search terms used.

“What we find is that people tend to use search terms that lean toward what they already believe is true,” said Leung.

“Imagine we asked participants to search for the health effects of caffeine. If they believe that caffeine is quite likely to be harmful, then they’re more likely to come up with search terms like dangers of caffeine, caffeine side effects, caffeine health risks.”

These narrow search terms, Leung said, mean that users receive only results that reinforce what they already believe.

“When we try to search for information online, a lot of times we actually are looking to learn something new,” said Leung.

“With the design of narrow search engines and also our tendency to come up with narrow search terms, the combination of this means that we’re often not actually learning something new.”

‘We are all susceptible’

A 2024 paper by computer scientist Boleslaw Szymanski, published in the journal Nature Human Behaviour, argues that we should consider our information space part of our natural environment, and acknowledge just how badly it’s being polluted by this “data smog.”

Our inability to decipher what’s true from what’s false online limits people’s capacity to evaluate information and make timely decisions, the authors write. They cite research suggesting this costs the U.S. economy more than a trillion dollars annually in lost productivity.

“We live in a time where the attention economy is dictating a lot of our experiences. So everybody is fighting for your attention,” says University of British Columbia psychologist Friedrich Götz. “And that makes all of us vulnerable.”

In his research, Götz wanted to understand who was most susceptible to false information online. In a global study published in the journal Personality and Individual Differences in March, Götz and his team asked more than 66,000 participants to discern between fake headlines and real ones.

Most people did poorly.

“The biggest takeaway is that nobody is immune,” he said.

But the study found certain groups were more susceptible than others, including women, people with lower levels of education, people who lean politically conservative, and Gen Z.

He points to education, and specifically the development and practice of critical thinking skills, as a defining factor in who fell for fake news more often.

By 2025, the global data sphere is expected to reach 181 zettabytes, or 181 trillion gigabytes, up from just two zettabytes in 2010. (Shutterstock/THICHA SATAPITANO)

Caulfield echoes the need for stronger critical thinking skills to help people navigate information in the attention economy, and suggests that something as simple as taking a moment to stop and process information can often help.

“I think that’s because it creates this break between your initial emotional response to the content, and it allows, even for that moment, your rational mind to kick in.”

Researchers also point to other mitigation strategies in the works, such as the University of Cambridge’s Bad News Game and other programs that walk people through the mechanics of manipulation. There’s also Concordia University’s SmoothDetector, which uses AI and algorithms to parse through the data smog and flag misinformation.

“We are now at a stage where I think the interventions could be implemented at scale. They have been tested,” said Götz.

“I think where we are at right now is the need for goodwill on part of the powerful actors who could actually help us implement that.”
