Countering Disinformation Effectively: An Evidence-Based Policy Guide [1]


Date: 2024-12

Key takeaways: Although platforms are neither the sole sources of disinformation nor the main causes of political polarization, there is strong evidence that social media algorithms intensify and entrench these off-platform dynamics. Algorithmic changes therefore have the potential to ameliorate the problem; however, this has not been directly studied by independent researchers, and the market viability of such changes is uncertain. Major platforms’ optimizing for something other than engagement would undercut the core business model that enabled them to reach their current size. Users could opt in to healthier algorithms via middleware or civically minded alternative platforms, but most people probably would not. Additionally, algorithms are blunt and opaque tools: using them to curb disinformation would also suppress some legitimate content.

Key sources:

Tarleton Gillespie, “Do Not Recommend? Reduction as a Form of Content Moderation,” Social Media + Society 8 (2022): https://journals.sagepub.com/doi/full/10.1177/20563051221117552.

Paul Barrett, Justin Hendrix, and J. Grant Sims, “Fueling the Fire: How Social Media Intensifies U.S. Political Polarization — And What Can Be Done About It,” NYU Stern Center for Business and Human Rights, September 13, 2021, https://bhr.stern.nyu.edu/polarization-report-page?_ga=2.126094349.1087885125.1705371436-402766718.1705371436.

Joshua A. Tucker et al., “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature,” Hewlett Foundation, March 2018, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3144139.

Description and Use Cases

Social media companies employ machine learning algorithms to recommend content to users through curated feeds or other delivery mechanisms. A separate set of algorithms is trained to detect undesirable content, which is then penalized by recommendation algorithms (what some scholars refer to as “reduction”).1 These two processes work together to shape what individuals see on social media.
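
To make this interaction concrete, the sketch below shows one way a recommendation score and a reduction classifier’s output could be combined so that likely-violating content is demoted in ranking rather than removed. The field names, threshold, and demotion factor are entirely hypothetical illustrations, not any platform’s actual system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float       # output of a hypothetical recommendation model
    violation_probability: float  # output of a hypothetical reduction classifier

DEMOTION_FACTOR = 0.1  # how strongly flagged content is down-ranked (assumed value)
FLAG_THRESHOLD = 0.8   # classifier confidence required before demoting (assumed value)

def ranking_score(post: Post) -> float:
    """Demote, rather than remove, posts the classifier flags as likely violations."""
    if post.violation_probability >= FLAG_THRESHOLD:
        return post.engagement_score * DEMOTION_FACTOR
    return post.engagement_score

def build_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by the combined score, highest first."""
    return sorted(posts, key=ranking_score, reverse=True)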

A major concern is that algorithms contribute to the spread of disinformation on social media because they typically reward engaging content.2 The more users are engaged, the more ad and subscription revenue a company can earn—and unfortunately, disinformation tends to be highly engaging. For instance, a landmark 2018 paper in Science found that human users spread false information more quickly than true information.3 Disinformation tends to be not only misleading but also sensationalist, divisive, and laden with negative emotions—seemingly tailor-made for distribution by engagement-seeking algorithms.4 In 2018, internal Facebook research found that the company’s recommendation algorithm promoted emotionally volatile content.5

Political figures and a range of other actors have opportunistically seized on this dynamic. The term “fake news” was popularized during the 2016 U.S. election, when Macedonians and others responded to the financial incentives of this attention economy by generating viral false news stories for U.S. audiences.6 Scholars and journalists have documented how clickbait pages and public relations firms in other countries use similar strategies to drive engagement and revenue.7 Changes to platform algorithms, though difficult to study, could potentially alter financial and political incentives in ways that constrain the spread of disinformation and increase social resilience to it by reducing polarization. Further, by stopping short of outright removal, reduction can preserve greater freedom of speech while still limiting the impact of disinformation.8

How Much Do We Know?

Very few published studies have directly assessed algorithmic adjustments as a counter-disinformation strategy. However, there is a strong circumstantial case—based on related social science, leaked platform research, and theoretical inferences—that platforms’ existing algorithms have amplified disinformation. It is therefore plausible that changes to these same algorithms could ameliorate the problem.

More research is needed on the extent to which new algorithmic changes can help to slow, arrest, or reverse cycles of disinformation and polarization that are already deeply rooted in some democracies.9 For example, would an algorithmic reduction in divisive content cause some users to migrate to other, more sensational platforms? The answers may change as the social media marketplace continues to evolve. The U.S. market has been dominated for several years by a handful of companies that publicly (albeit imperfectly) embrace a responsibility to moderate their platforms. But major platforms’ cuts in trust and safety budgets, Twitter’s ownership change, and the rise of numerous alternative platforms suggest this moment may be passing.

How Effective Does It Seem?

To be sure, platforms are neither the sole sources of disinformation nor “the original or main cause of rising U.S. political polarization” and similar patterns elsewhere.10 Moreover, what people view on platforms is not just a function of recommendation algorithms. For example, one study of 2020 browsing data suggested that individuals’ conscious choices to subscribe to YouTube channels or navigate to them via off-site links drove more views of extremist videos than algorithmic rabbit holes did.11

Nevertheless, there is good reason to believe that social media algorithms intensify and entrench on- and off-platform dynamics which foment disinformation. A 2018 literature review published by the Hewlett Foundation described a self-reinforcing cycle involving social media, traditional media, and political elites.12 It found that when algorithms amplify misleading and divisive content online, political elites and traditional media have greater incentive to generate disinformation and act and communicate in polarizing ways—because the subsequent social media attention can help them earn support and money. In other words, a combination of online and offline dynamics leads to a degradation of the political-informational ecosystem.

If curation algorithms help to foment disinformation, then changes to those algorithms could potentially help to ameliorate the problem, or at least lessen platforms’ contributions to it. Such changes could fall into at least two broad categories. First, platforms could expand their use of algorithms to reduce the prevalence of certain kinds of content through detection and demotion or removal. Second, they could train algorithms to recommend content in pursuit of values other than, or in addition to, engagement.13


Facebook temporarily used the first strategy during the 2020 U.S. election, employing machine learning to demote content more aggressively based on the probability it could incite violence or violate other policies.14 The use of machine learning to assess content in this way is called “classification.”15 It allows platforms to moderate content at scale more efficiently than human review, but it is far from an exact science. Classifiers differ greatly in their precision, so any enhanced reliance on reduction strategies might require improving classifier reliability. Human-made rules and judgments are also still an important factor in reduction strategies. Platform staff must decide what types of content algorithms should search for and weigh the risks of penalizing permissible speech or permitting harmful content to go unmoderated. What level of reliability and statistical confidence should a classification algorithm demonstrate before its judgments are used to demote content? Should algorithms only demote content that might violate explicit terms of service, or should other concepts, such as journalistic quality, play a role? Answers to these questions have important implications for freedom of speech online. Regardless, stronger use of classifiers to demote sensational content could plausibly reduce the spread of disinformation.
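
The threshold question in the preceding paragraph can be illustrated with a toy example. The scores and labels below are made up, not platform data; the point is only that raising or lowering a classifier’s confidence threshold trades wrongly demoted permissible posts against missed violations.

def confusion_at_threshold(scores, labels, threshold):
    """Count outcomes if every item scoring at or above the threshold were demoted."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)  # violations demoted
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)  # permissible posts demoted
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)   # violations missed
    return tp, fp, fn

# Toy data: classifier scores and ground-truth labels (1 = violating content).
scores = [0.95, 0.90, 0.75, 0.60, 0.40, 0.20]
labels = [1, 1, 0, 1, 0, 0]

for threshold in (0.5, 0.7, 0.9):
    tp, fp, fn = confusion_at_threshold(scores, labels, threshold)
    print(f"threshold={threshold}: demoted violations={tp}, "
          f"wrongly demoted={fp}, missed violations={fn}")

In this toy setting, a higher threshold protects more permissible speech but lets more violating content circulate untouched; a lower threshold does the reverse.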

Recommendation algorithms could also be made to prioritize values, or “curatorial norms,” other than engagement.16 While the major platforms fixate on maximizing engagement because of their commercial interests, a few smaller platforms design user experiences to promote constructive debate and consensus-building—for example, by requiring users to earn access to features (including private messaging and hosting chat rooms) through good behavior.17 Recommendation algorithms can be used for a similar purpose: a 2023 paper explored how algorithms can “increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation.”18 Similarly, others have advocated for drawing on library sciences to create recommendation algorithms that amplify the spread of authoritative content instead of eye-catching but potentially untrustworthy content.19
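
As a rough illustration of what such a “value-aware” ranking could look like, the sketch below blends a hypothetical engagement prediction with equally hypothetical bridging and source-authority signals. The signals and weights are assumptions for illustration; choosing them is a policy decision, not something drawn from an existing platform.

from dataclasses import dataclass

@dataclass
class Candidate:
    engagement: float  # predicted engagement (clicks, watch time, etc.)
    bridging: float    # hypothetical predicted cross-partisan approval
    authority: float   # hypothetical source-reliability rating

def blended_score(c: Candidate,
                  w_engage: float = 0.4,
                  w_bridge: float = 0.4,
                  w_authority: float = 0.2) -> float:
    """Weighted mix of curatorial norms; the weights are policy choices, not measurements."""
    return w_engage * c.engagement + w_bridge * c.bridging + w_authority * c.authority

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by the blended score, highest first."""
    return sorted(candidates, key=blended_score, reverse=True)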

Another idea is to give users more control over what content they see. Several large platforms now allow users to opt in to a “chronological feed” that lists all followed content in order, with no algorithmic ranking. Others have proposed enabling users to pick their own algorithms. This could be done through middleware, or software that serves as an intermediary between two other applications—in this case, between the user interface and the larger social media platform, allowing users to choose from competing versions of algorithmic recommendation.20 It is difficult to evaluate middleware as a counter-disinformation intervention because it has not been widely implemented. Critics say the recommendation algorithm that many users prefer may well be the most sensationalist and hyperpartisan, not the most measured and constructive.21 Political scientist Francis Fukuyama, perhaps the most prominent advocate for middleware, has said that the goal of middleware is not to mitigate misinformation but to dilute the power of large technology companies in the public square.22
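
The middleware idea can be sketched as a pluggable ranking layer: the platform supplies the pool of candidate posts, and the user selects which ranker orders them. The example below is purely illustrative; the interface and ranker names are assumptions, not any proposed standard.

from dataclasses import dataclass
from datetime import datetime
from typing import Callable

@dataclass
class FeedItem:
    text: str
    posted_at: datetime
    predicted_engagement: float

Ranker = Callable[[list[FeedItem]], list[FeedItem]]

def chronological(items: list[FeedItem]) -> list[FeedItem]:
    """Newest-first feed with no algorithmic ranking."""
    return sorted(items, key=lambda i: i.posted_at, reverse=True)

def engagement_ranked(items: list[FeedItem]) -> list[FeedItem]:
    """The conventional engagement-maximizing ordering."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

# The user's chosen ranker sits between the platform's data and the interface.
AVAILABLE_RANKERS: dict[str, Ranker] = {
    "chronological": chronological,
    "engagement": engagement_ranked,
}

def render_feed(items: list[FeedItem], user_choice: str) -> list[FeedItem]:
    """Apply whichever ranker the user selected."""
    return AVAILABLE_RANKERS[user_choice](items)

In such a design, a chronological feed is simply one ranker among several; third parties could register alternatives, which is the kind of choice middleware proponents want to give users.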

Finally, it should be noted that some users are unusually motivated to seek out and consume extreme, divisive, and inflammatory content. Research suggests that these users represent a small fraction of the overall user base, but they can have an outsized political impact, both online and offline.23 These atypical users will likely find their way to the content they desire—through searches, subscriptions, messages with other users, and other means.24

How Easily Does It Scale?

The market viability of algorithmic changes is uncertain. Today’s large platforms grew to their current size by combining data collection, targeted advertising, and personalized systems for distributing compelling—if often troublesome—content. Dramatic departures from this model would have unknown business consequences, but analogous changes involving user privacy suggest the costs could reach tens of billions of dollars. When Apple made it more difficult to serve targeted advertisements on iOS devices, Meta claimed the change cost the company $10 billion in annual revenue. Analysts estimated in 2023 that an EU legal ruling requiring Meta to let users opt out of targeted ads could cost the company between 5 and 7 percent of its advertising revenue, which was $118 billion in 2021.25

It is also unclear whether civically oriented platforms, absent public support or heavy-handed regulation, can compete with social media companies that carefully calibrate their services to maximize engagement. Even some proponents of civically oriented platforms have suggested that these are unlikely to scale as well as today’s largest platforms because they often serve communities with specific needs, such as those seeking anonymity or highly attentive content moderation. Such platforms may lack key features many users desire, like the ability to share photos or reply directly to posts.26 Regulations can help level the playing field: for example, the EU’s Digital Services Act (DSA) now requires the biggest platforms to provide EU users with a chronological feed option.27

Finally, new algorithms intended to reduce disinformation could be just as opaque as today’s algorithms, while also having unintended consequences for online speech. Removing content through takedowns or other means is relatively apparent to the public, but algorithmic reduction requires complex, back-end changes that are more difficult for external analysts to detect or study. Additionally, efforts to curb the spread of sensationalist or emotionally manipulative content could also sap the online creativity that makes the internet a driver of culture and commerce. Using algorithms to reduce disinformation, and only disinformation, may well be impossible.

Notes

1 Tarleton Gillespie, “Do Not Recommend? Reduction as a Form of Content Moderation,” Social Media + Society 8 (2022): https://journals.sagepub.com/doi/full/10.1177/20563051221117552.

2 Maréchal and Biddle, “It’s Not Just the Content.”

3 Vosoughi, Roy, and Aral, “Spread of True and False News.”

4 Geoff Nunberg, “‘Disinformation’ Is the Word of the Year—and a Sign of What’s to Come,” NPR, December 30, 2019, https://www.npr.org/2019/12/30/790144099/disinformation-is-the-word-of-the-year-and-a-sign-of-what-s-to-come.

5 Jeff Horwitz and Deepa Seetharaman, “Facebook Executives Shut Down Efforts to Make the Site Less Divisive,” Wall Street Journal, May 26, 2020, https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499; and Loveday Morris, “In Poland’s Politics, a ‘Social Civil War’ Brewed as Facebook Rewarded Online Anger,” Washington Post, October 27, 2021, https://www.washingtonpost.com/world/2021/10/27/poland-facebook-algorithm.

6 Samanth Subramanian, “The Macedonian Teens Who Mastered Fake News,” Wired, February 15, 2017, https://www.wired.com/2017/02/veles-macedonia-fake-news.

7 Consider Swati Chaturvedi, I Am a Troll: Inside the Secret World of the BJP’s Digital Army (Juggernaut Books, 2016); and Jonathan Corpus Ong and Jason Vincent A. Cabañes, “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines,” University of Massachusetts Amherst, Communications Department Faculty Publication Series, 2018, https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1075&context=communication_faculty_pubs.

8 Renée DiResta, “Free Speech Is Not the Same as Free Reach,” Wired, August 30, 2018, https://www.wired.com/story/free-speech-is-not-the-same-as-free-reach.

9 Jennifer McCoy, Benjamin Press, Murat Somer, and Ozlem Tuncel, “Reducing Pernicious Polarization: A Comparative Historical Analysis of Depolarization,” Carnegie Endowment for International Peace, May 5, 2022, https://carnegieendowment.org/2022/05/05/reducing-pernicious-polarization-comparative-historical-analysis-of-depolarization-pub-87034.

10 A review by New York University’s Stern Center for Business and Human Rights found that while “Facebook, Twitter, and YouTube are not the original or main cause of rising U.S. political polarization . . . use of those platforms intensifies divisiveness and thus contributes to its corrosive consequences.” While observers sometimes attribute increased polarization to social media echo chambers, scholars note that most users see a diverse range of news sources. In fact, online heterogeneity might be related to increased polarization: one study based on Twitter data suggests that “it is not isolation from opposing views that drives polarization but precisely the fact that digital media” collapses the cross-cutting conflicts that occur in local interactions, thereby hardening partisan identities and deepening division. (Other, non-digital sources of national news are believed to promote similar dynamics.) And the evidence is not all in one direction: in a study on Bosnia and Herzegovina, abstinence from Facebook use was associated with lower regard for ethnic outgroups, suggesting Facebook could have a depolarizing effect on users. See Paul Barrett, Justin Hendrix, and J. Grant Sims, “Fueling the Fire: How Social Media Intensifies U.S. Political Polarization — And What Can Be Done About It,” NYU Stern Center for Business and Human Rights, September 13, 2021, https://bhr.stern.nyu.edu/polarization-report-page?_ga=2.126094349.1087885125.1705371436-402766718.1705371436; Andrew Guess, Brendan Nyhan, Benjamin Lyons, and Jason Reifler, “Avoiding the Echo Chamber About Echo Chambers,” Knight Foundation, 2018, https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/133/original/Topos_KF_White-Paper_Nyhan_V1.pdf; Petter Törnberg, “How Digital Media Drive Affective Polarization Through Partisan Sorting,” PNAS 119 (2022): https://www.pnas.org/doi/10.1073/pnas.2207159119; Darr, Hitt, and Dunaway, “Newspaper Closures Polarize”; and Nejla Ašimović, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker, “Testing the Effects of Facebook Usage in an Ethnically Polarized Setting,” PNAS 118 (2021): https://www.pnas.org/doi/10.1073/pnas.2022819118.

11 Annie Y. Chen, Brendan Nyhan, Jason Reifler, Ronald E. Robertson, and Christo Wilson, “Subscriptions and External Links Help Drive Resentful Users to Alternative and Extremist YouTube Videos,” Science Advances 9 (2023): https://www.science.org/doi/10.1126/sciadv.add8080.

12 As discussed in case study 1, polarization and its effect on spreading disinformation is not symmetrical across the political spectrum. In most (but not all) political contexts, the right side of the political spectrum has moved further from the mainstream both ideologically and in its media diet, creating more appetite for partisan disinformation. See Joshua A. Tucker et al., “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature,” Hewlett Foundation, March 2018, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3144139.

13 Meta refers to “prevalence” of content on its platform as a measure of how many users see a specific piece of content, a metric which “assumes that the impact caused by violating content is proportional to the number of times that content is viewed.” See “Prevalence,” Meta, November 18, 2022, https://transparency.fb.com/policies/improving/prevalence-metric.

Some companies may contest that “engagement” is the primary motive behind platform algorithms, or they may point to certain forms of engagement as positive for society and therefore an acceptable optimization goal for curation algorithms. Meta, for instance, says that it promotes “meaningful interactions” between users instead of mere clicks. Engagement is used here as a shorthand for the broad goal of motivating users to continue using a service, especially for longer periods of time. See Adam Mosseri, “News Feed FYI: Bringing People Closer Together,” Meta, January 11, 2018, https://www.facebook.com/business/news/news-feed-fyi-bringing-people-closer-together.

14 Shannon Bond and Bobby Allyn, “How the ‘Stop the Steal’ Movement Outwitted Facebook Ahead of the Jan. 6 Insurrection,” NPR, October 22, 2021, https://www.npr.org/2021/10/22/1048543513/facebook-groups-jan-6-insurrection.

15 Robert Gorwa, Reuben Binns, and Christian Katzenbach, “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance,” Big Data & Society 7 (2020): https://journals.sagepub.com/doi/10.1177/2053951719897945.

16 Renée DiResta, “Up Next: A Better Recommendation System,” Wired, April 11, 2018, https://www.wired.com/story/creating-ethical-recommendation-engines.

17 Chand Rajendra-Nicolucci and Ethan Zuckerman, “An Illustrated Field Guide to Social Media,” Knight First Amendment Institute, May 14, 2021, https://knightcolumbia.org/blog/an-illustrated-field-guide-to-social-media.

18 Aviv Ovadya and Luke Thorburn, “Bridging Systems: Open Problems for Countering Destructive Divisiveness Across Ranking, Recommenders, and Governance?,” Knight First Amendment Institute, October 26, 2023, https://knightcolumbia.org/content/bridging-systems.

19 Joan Donovan, “Shhhh... Combating the Cacophony of Content with Librarians,” National Endowment for Democracy, January 2021, https://www.ned.org/wp-content/uploads/2021/01/Combating-Cacophony-Content-Librarians-Donovan.pdf.

20 See Francis Fukuyama, Barak Richman, and Ashish Goel, “How to Save Democracy from Technology: Ending Big Tech’s Information Monopoly,” Foreign Affairs, November 24, 2020, https://www.foreignaffairs.com/articles/united-states/2020-11-24/fukuyama-how-save-democracy-technology; and Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed, and Marietje Schaake, “Middleware for Dominant Digital Platforms: A Technological Solution to a Threat to Democracy,” Stanford Cyber Policy Center, accessed March 13, 2023, https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-middleware_ff_v2.pdf.

21 See “The Future of Platform Power,” various authors, Journal of Democracy 32 (2021): https://www.journalofdemocracy.org/issue/july-2021/.

22 Katharine Miller, “Radical Proposal: Middleware Could Give Consumers Choices Over What They See Online,” Stanford University Human-Centered Artificial Intelligence, October 20, 2021, https://hai.stanford.edu/news/radical-proposal-middleware-could-give-consumers-choices-over-what-they-see-online.

23 Consider, for example, Ronald E. Robertson, “Uncommon Yet Consequential Online Harms,” Journal of Online Trust & Safety 1 (2022): https://tsjournal.org/index.php/jots/article/view/87.

24 Ronald E. Robertson et al., “Users Choose to Engage With More Partisan News than They Are Exposed to on Google Search,” Nature 618 (2023): https://www.nature.com/articles/s41586-023-06078-5.

25 Adam Satariano, “Meta’s Ad Practices Ruled Illegal Under E.U. Law,” New York Times, January 4, 2023, https://www.nytimes.com/2023/01/04/technology/meta-facebook-eu-gdpr.html.

26 Rajendra-Nicolucci and Zuckerman, “Illustrated Field Guide.”

27 Natasha Lomas, “All Hail the New EU Law That Lets Social Media Users Quiet Quit the Algorithm,” TechCrunch, August 25, 2023, https://techcrunch.com/2023/08/25/quiet-qutting-ai.

[1] Url: https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide?lang=en
