The vast majority of people these days use some form of social media, but some develop what’s known as problematic social media use (PSMU). It’s not yet deemed a clinical addiction, but it does share some symptoms with addiction and substance abuse disorders. And according to a new paper published in the journal PLoS ONE, someone who exhibits PSMU is also more likely to believe in—and share—fake news online, contributing to the rampant spread of misinformation that is the bane of the 21st-century Internet.
“If someone struggles with a substance dependency, it’s the decision-making process in their brain where they have difficulties stopping,” co-author Dar Meshi of Michigan State University told Ars. “They take their drug and have a negative outcome: get a DUI or crash their car. Most people learn from a bad outcome and don’t do it again, but someone with a substance use disorder continues to do that action.”
In the case of PSMU, someone might feel bad if they are unable to access social media for an extended period (withdrawal), or their use of social media might lead to losing a job, poor grades, or mental health issues.
Meshi specializes in risky decision-making, impulsivity, and PSMU; his co-author and MSU colleague Maria Molina researches misinformation and disinformation. The two were chatting one day, and Meshi mentioned that he’d found in his research that problematic social media users were typically more impulsive and took more risks than average. He thought there might be an interesting link.
Perhaps people with PSMU might also be more likely to engage with, or believe in and propagate, online misinformation “because their risk evaluation is a little bit different than a neurotypical person,” he said. (Misinformation is fake or false news that is unintentionally distributed; disinformation is when it is intentionally spread, explicitly to deceive.)
Their study gauged subjects’ propensity to believe, and engage with, fake news by measuring actions such as clicking on a link or liking, sharing, or commenting on posts. Meshi and Molina recruited 189 college students who completed a questionnaire about their social media habits.
Social media isn’t the only place where online behavior can entrench beliefs. In a separate study, researchers Leung and Urminsky found that people tend to enter directionally narrow search terms that reflect what they already believe, and that search engines return correspondingly narrow results that confirm those beliefs. So, the researchers decided to see if they could upend that feedback loop.
Keeping it general
The simplest way to change this dynamic was to change the results the search returned. So, the researchers ran a number of experiments in which they gave all of the participants the same results, regardless of the search terms they had used. When everybody gets the same results, their opinions after reading them tend to move in the same direction, suggesting that search results can help change people’s opinions.
The researchers also tried giving everyone the results of a broad, neutral search, regardless of the terms they’d entered. This lowered the odds that participants’ existing beliefs would survive the process of formulating and executing a search. In other words, sidestepping focused, biased search terms allowed some participants to see information that could change their minds.
Despite all the swapping, participants continued to rate the search results as relevant. So providing more general results, even when people were looking for something more focused, doesn’t seem to harm their perception of the service. In fact, Leung and Urminsky found that the AI version of Bing search would already reformulate narrow questions into more general ones.
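To make the broad-search manipulation concrete, here is a minimal Python sketch of the underlying idea: rewriting directionally slanted queries so that differently slanted searchers end up seeing the same neutral results. The topic, the query list, and the broaden_query helper are all hypothetical illustrations, not Leung and Urminsky’s actual materials or any real search engine’s API.

```python
# Minimal sketch of "query broadening," the idea behind serving everyone
# the results of a broad, neutral search. All queries and mappings below
# are hypothetical, invented purely for illustration.

# Hypothetical map from belief-laden queries to one neutral topic query.
NEUTRAL_QUERIES = {
    "caffeine dangers": "caffeine health effects",
    "caffeine benefits": "caffeine health effects",
}

def broaden_query(query: str) -> str:
    """Rewrite a directionally narrow query into a neutral one.

    Unrecognized queries pass through unchanged, since only recognizably
    slanted searches need reformulating.
    """
    return NEUTRAL_QUERIES.get(query.lower().strip(), query)

if __name__ == "__main__":
    # Opposite-leaning searchers now trigger the same neutral search,
    # so both see the same broad set of results.
    for q in ["caffeine dangers", "Caffeine benefits", "weather in Chicago"]:
        print(f"{q!r} -> {broaden_query(q)!r}")
```

The point of the design is that two users with opposite leanings (“dangers” versus “benefits”) converge on one query, which mirrors the experimental condition in which everyone received the same results regardless of the terms they typed.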
That said, making this sort of change wouldn’t be without risks. There are plenty of subject areas where a search shouldn’t return a broad range of information, because casting a wide net would expose people to fringe and false information.
Nevertheless, it can’t hurt to be aware of how we can use search services to reinforce our biases. So, in the words of Leung and Urminsky, “When search engines provide directionally narrow search results in response to users’ directionally narrow search terms, the results will reflect the users’ existing beliefs, instead of promoting belief updating by providing a broad spectrum of related information.”
Daniel Simons and Christopher Chabris are the authors of Nobody’s Fool: Why We Get Taken In and What We Can Do About It. Credit: Basic Books
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: A conversation with psychologists Daniel Simons and Christopher Chabris on the key habits of thinking and reasoning that may serve us well most of the time, but can make us vulnerable to being fooled.
It’s one of the most famous experiments in psychology. Back in 1999, Daniel Simons and Christopher Chabris conducted an experiment on inattentional blindness. They asked test subjects to watch a short video in which six people—half in white T-shirts, half in black ones—passed basketballs around. The subjects were asked to count the number of passes made by the people in white shirts. Halfway through the video, a person in a gorilla suit walked into the midst of the players and thumped their chest at the camera before strolling off-screen. What surprised the researchers was that fully half the test subjects were so busy counting the number of basketball passes that they never saw the gorilla.
The experiment became a viral sensation—helped by the amusing paper title, “Gorillas in Our Midst”—and snagged Simons and Chabris the 2004 Ig Nobel Psychology Prize. It also became the basis of their bestselling 2010 book, The Invisible Gorilla: How Our Intuitions Deceive Us. Thirteen years later, the two psychologists are back with their latest book, published last July, called Nobody’s Fool: Why We Get Taken In and What We Can Do About It. Simons and Chabris have penned an entertaining examination of key habits of thinking that usually serve us well but also make us vulnerable to cons and scams. They also offer some practical tools based on cognitive science to help us spot deceptions before being taken in.
“People love reading about cons, yet they keep happening,” Simons told Ars. “Why do they keep happening? What is it those cons are tapping into? Why do we not learn from reading about Theranos? We realized there was a set of cognitive principles that seemed to apply across all of the domains, from cheating in sports and chess to cheating in finance and biotech. That became our organizing theme.”
Ars spoke with Simons and Chabris to learn more.
Ars Technica: I was surprised to learn that people still fall for basic scams like the Nigerian Prince scam. It reminds me of Fox Mulder’s poster on The X-Files: “I want to believe.”
Daniel Simons: The Nigerian Prince scam is an interesting one because it’s been around forever. Its original form was in letters. Most people don’t get fooled by that one. The vast majority of people look at it and say, this thing is written in terrible grammar. It’s a mess. And why would anybody believe that they’re the one to recover this vast fortune? So there are some people who fall for it, but it’s a tiny percentage of people. I think it’s still illustrative because that one is obviously too good to be true for most people, but there’s some small subset of people for whom it’s just good enough. It’s just appealing enough to say, “Oh yeah, maybe I could become rich.”
There was a profile in the New Yorker of a clinical psychologist who fell for it. There are people who, for whatever reason, are either desperate or have the idea that they deserve to inherit a lot of money. But there are a lot of scams that are much less obvious than that one, selecting for the people who are most naive about it. I think the key insight there is that we tend to assume that only gullible people fall for this stuff. That is fundamentally wrong. We all fall for this stuff if it’s framed in the right way.
Christopher Chabris: I don’t think they’re necessarily people who always want to believe. I think it really depends on the situation. Some people might want to believe that they can strike it rich in crypto, but they would never fall for a Nigerian email or, for that matter, they might not fall for a traditional Ponzi scheme because they don’t believe in fiat money or the stock market. Going back to the Invisible Gorilla, one thing we noticed was a lot of people would ask us, “What’s the difference between the people who noticed the gorilla and the people who didn’t notice the gorilla?” The answer is, well, some of them happened to notice it and some of them didn’t. It’s not an IQ or personality test. So in the case of the Nigerian email, there might’ve been something going on in that guy’s life at that moment when he got that email that maybe led him to initially accept the premise as true, even though he knew it seemed kind of weird. Then, he got committed to the idea once he started interacting with these people.
So one of our principles is commitment: the idea that if you accept something as true and you don’t question it anymore, then all kinds of bad decisions and bad outcomes can flow from that. So, if you somehow actually get convinced that these guys in Nigeria are real, that can explain the bad decisions you make after that. I think there’s a lot of unpredictableness about it. We all need to understand how these things work. We might think it sounds crazy and we would never fall for it, but we might if it was a different scam at a different time.