
Misinformation on Facebook got six times more clicks than factual news during the 2020 election, study says


Correction

An earlier version of this story mischaracterized as a non-profit one of the organizations whose work the NYU study relied on. NewsGuard is a for-profit company that helps advertisers avoid having their advertisements appear on misinformation and hoax sites. This version has been corrected.

A new study of user behavior on Facebook around the 2020 election is likely to bolster critics’ long-standing arguments that the company’s algorithms fuel the spread of misinformation over more trustworthy sources.


The forthcoming peer-reviewed study by researchers at New York University and the Université Grenoble Alpes in France found that from August 2020 to January 2021, news publishers known for putting out misinformation got six times as many likes, shares, and interactions on the platform as trustworthy news sources, such as CNN or the World Health Organization.

Ever since “fake news” on Facebook became a public concern following the 2016 presidential election, publishers who traffic in misinformation have been repeatedly shown to be able to gain major audiences on the platform. But the NYU study is one of the few comprehensive attempts to measure and isolate the misinformation effect across a wide group of publishers on Facebook, experts said, and its conclusions support the criticism that Facebook’s platform rewards publishers that put out misleading accounts.


The study “helps add to the growing body of evidence that, despite a variety of mitigation efforts, misinformation has found a comfortable home — and an engaged audience — on Facebook,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who reviewed the study’s findings.

In response, Facebook said that the report measured the number of people who engage with content, but that is not a measure of the number of people who actually view it (Facebook does not make the latter number, called impressions, publicly available to researchers).

“This report looks mostly at how people engage with content, which should not be confused with how many people actually see it on Facebook,” said Facebook spokesman Joe Osborne. “When you look at the content that gets the most reach across Facebook, it is not at all like what this study suggests.”


He added that the company has 80 fact-checking partners covering more than 60 languages that work to label and reduce the distribution of false information.


The study’s authors relied on categorizations from two organizations that study misinformation, NewsGuard and Media Bias/Fact Check. Both groups have categorized thousands of Facebook publishers by their political leanings, ranging from far left to far right, and by their propensity to share trustworthy or untrustworthy news. The team then took 2,551 of these pages and compared interactions on posts from publishers known for misinformation, such as the left-leaning Occupy Democrats and the right-leaning Dan Bongino and Breitbart, with interactions on posts from factual publishers.
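To make the kind of comparison described above concrete, here is a minimal sketch in Python. It assumes a hypothetical file, posts.csv, with one row per post and columns trust_label, political_leaning, and interactions; the study’s actual data pipeline, engagement metric, and normalization are not detailed in this article and may differ.

```python
# Hypothetical sketch of comparing engagement on misinformation vs. factual pages.
# Assumes posts.csv has columns: page_id, trust_label, political_leaning, interactions.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"interactions": 0, "posts": 0})

with open("posts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = (row["trust_label"], row["political_leaning"])
        totals[key]["interactions"] += int(row["interactions"])
        totals[key]["posts"] += 1

# Average interactions per post for each (trust label, political leaning) bucket,
# so misinformation publishers can be compared against factual ones.
for (label, leaning), t in sorted(totals.items()):
    avg = t["interactions"] / t["posts"]
    print(f"{leaning:>12} / {label:<15} {avg:10.1f} interactions per post")
```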

The researchers also found that the statistically significant misinformation boost is politically neutral — misinformation-trafficking pages on both the far left and the far right generated much more engagement from Facebook users than factual pages of any political slant. But publishers on the right have a much higher propensity to share misleading information than publishers in other political categories, the study found. The latter finding echoes the conclusions of other researchers, as well as Facebook’s own internal findings ahead of the 2018 midterm elections, according to Washington Post reporting.


Rafael Rivero, co-founder and president of Occupy Democrats, said he was curious about the methodology in the report and disputed the idea that the page spreads misinformation. “We occasionally get small things wrong — and immediately issue corrections — but we would never deliberately mislead our readers,” he added in an emailed statement.


In a Twitter message, Bongino said, “Accusing me in your piece of ‘misinformation’ (without citing a single credible example) while working for a ‘newspaper’ that promoted the ridiculous pee-pee hoax, is peak ‘journalism’ in 2021. I’d say you should be ashamed, but working for the Washington Post renders you incapable of shame.”

Breitbart did not respond to requests for comment.

Facebook’s critics have long charged that misleading, inflammatory content that often reinforces the viewpoints of its viewers generates significantly more attention and clicks than mainstream news.

That claim — which has been reiterated by members of Congress as well as by Silicon Valley engineers in films such as “The Social Dilemma” — has gained significant traction during the pandemic. Conspiracy theories about covid-19 and vaccines, along with misleading information about treatments and cures, have gone viral, and may have influenced the views of large numbers of Americans. A recent survey by the COVID States Project found that U.S. Facebook users were less likely to be vaccinated than any other type of news consumer, even consumers of right-leaning Fox News.


President Biden upped the ante in July when he said covid-related misinformation on platforms such as Facebook was “killing people,” a comment he later walked back.

But there has been little hard data to back up the assertions about the harm caused by Facebook’s algorithms, in part because Facebook has limited the data that researchers can access, Tromble said.

In 2018, an MIT study of misleading stories on Twitter — a platform whose content, unlike Facebook’s, is largely public — found that they performed better among Twitter users than factual stories. Other studies have found that engagement with misinformation is not as widespread as people might think, and that the people who consume and spread misinformation tend to be small numbers of highly motivated partisans.


Facebook is also increasingly restricting access to outside groups that make attempts to mine the company’s data. In the past several months, the White House has repeatedly asked Facebook for information about the extent of covid misinformation on the platform, but the company did not provide it.


One of the researchers Facebook has clamped down on is Laura Edelson, who conducted the NYU study. The company cut off Edelson and her colleagues’ accounts last month, arguing that her data collection — which relied on users voluntarily downloading a software widget that allows researchers to track the ads they see — put Facebook potentially in violation of a 2019 U.S. Federal Trade Commission privacy settlement.

The commission, in a rare rebuttal, shot back that the settlement makes exceptions for researchers and that Facebook should not use it as an excuse to deny the public the ability to understand people’s behavior on social networks.

Edelson noted that because Facebook stopped her project, called the NYU Ad Observatory, last month, she would not be able to continue to study the reach and impact of misinformation on the platform.


In response to criticism that it is becoming less transparent, Facebook recently published a new transparency report that shows the most popular content on the platform every quarter. But the report is highly curated, and Facebook censored an earlier version of the report out of concerns that it would generate bad press, according to a person familiar with the discussions who spoke on the condition of anonymity to describe sensitive conversations. That led critics to argue that the company was not being transparent.

One of the reasons it is hard to tell how much exposure people have to misinformation on Facebook in particular is because so much content is shared in private groups, Tromble said.


To conduct the analysis, Edelson’s team used CrowdTangle, a Facebook-owned business analytics tool often used by journalists and researchers to track the popularity of posts. But CrowdTangle has limitations as well: the tool shares how many likes and shares a particular post received, but does not disclose what are known as impressions, or how many people saw the post.
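A short sketch of that limitation, using a hypothetical record layout rather than CrowdTangle’s actual API schema: engagement fields can be totaled, but there is no impressions field from which to compute how many people saw a post.

```python
# Hypothetical engagement records for illustration only
# (not CrowdTangle's real API response format).
posts = [
    {"page": "example_page_a", "likes": 1200, "shares": 340, "comments": 210},
    {"page": "example_page_b", "likes": 90, "shares": 12, "comments": 7},
]

# Interactions can be summed from the available fields...
total_engagement = sum(p["likes"] + p["shares"] + p["comments"] for p in posts)
print("Total interactions:", total_engagement)

# ...but reach cannot be derived: no "impressions" (views) count is in the data.
```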


Edelson said the study showed that Facebook algorithms were not rewarding partisanship or bias, or favoring sites on one side of the political spectrum, as some critics have claimed. She said that Facebook amplifies misinformation because it does well with users, and the sites that happen to have more misinformation are on the right. Among publishers categorized as on the far right, those that share misinformation get a majority — or 68 percent — of all engagement from users.