Little rewards get people to see truth in politically unfavorable info

Piecing together why so many people are willing to share misinformation online is a major focus among behavioral scientists. It’s easy to think partisanship is driving it all—people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don’t seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it’s not clear what induces users to stop sharing things that a small bit of checking would show to be untrue.

So, a team of researchers tried the obvious: We’ll give you money if you stop and evaluate a story’s accuracy. The work shows that small payments, and even non-monetary rewards, boost the accuracy of people’s evaluations of stories. Nearly all of that effect comes from people recognizing stories that don’t favor their political stance as factually accurate. And while the cash boosted conservatives’ accuracy more than liberals’, conservatives started out so far behind in judging accuracy that a substantial gap remained.

Money for accuracy

The basic outline of the new experiments is pretty simple: get a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site such as Facebook. The headlines had been rated in advance for accuracy (i.e., whether they were true or misinformation) and for whether they were more favorable to liberals or to conservatives.

Consistent with past experiments, participants were more likely to rate headlines that favored their political leanings as true. As a result, most of the misinformation rated as true was rated that way because it was consistent with the rater’s political leanings. While this held for both sides of the political spectrum, conservatives were significantly more likely to rate misinformation as true, an effect seen so often that the researchers cite seven different papers as having shown it previously.

On its own, this sort of replication is useful but not very interesting. The interesting stuff came when the researchers started varying this procedure. And the simplest variation was one where they paid participants a dollar for every story they correctly identified as true.

In news that will surprise no one, payment made people more accurate. In raw numbers, participants got an average of 10.4 accuracy ratings out of 16 right in the control condition but over 11 out of 16 right when payment was involved. The same effect showed up when, instead of payment, participants were told the researchers would give them an accuracy score once the experiment was done.

The most striking thing about this experiment was that nearly all the improvement came when people rated the accuracy of statements that favored their political opponents. In other words, the reward made people better at recognizing the truth in statements that, for political reasons, they’d prefer to think weren’t true.

A smaller gap, but still a gap

The opposite was true when the experiment was shifted so that people were asked to identify stories their political allies would like. Here, accuracy dropped. This suggests that participants’ frame of mind plays a large role: incentivizing them to focus on politics pulled their focus away from accuracy. Notably, the drop was roughly as large as the boost produced by the financial reward.

The researchers also created a condition where participants weren’t told the source of the headline, so they couldn’t tell whether it came from partisan-friendly media. This didn’t make any significant difference to the results.

As noted above, conservatives were generally worse at this than liberals, with the average conservative getting 9.3 out of 16 right (58 percent) and the typical liberal getting 10.9 (68 percent). Both groups saw their accuracy go up when incentives were offered, but the effect was larger for conservatives, raising their average to 10.1 out of 16 (63 percent). So while incentives left conservatives significantly more accurate than they were without them, they still didn’t reach the accuracy liberals managed with no incentive at all.

So, while it looks like part of conservatives’ tendency to share misinformation comes down to a lack of motivation to get things right, motivation only explains part of the effect.

The research team suggests that, while a payment system would probably be impossible to scale, the fact that an accuracy score had roughly the same impact points to a way for social networks to cut down on the misinformation their users spread. But this seems naive.

Fact-checkers were initially promoted as a way of cutting down on misinformation. But, consistent with these results, they tended to rate more of the pieces shared by conservatives as misinformation and eventually ended up labeled as biased. Similarly, attempts to limit the spread of misinformation on social networks have seen the heads of those networks accused of censoring conservatives at Congressional hearings. So, even if it works in these experiments, it’s likely that any attempt to roll out a similar system in the real world will be very unpopular in some quarters.

Nature Human Behaviour, 2023. DOI: 10.1038/s41562-023-01540-w
