(C) PLOS One
This story was originally published by PLOS One and is unaltered.



Debriefing works: Successful retraction of misinformation following a fake news study [1]

Ciara M. Greene, School of Psychology, University College Dublin, Dublin; Gillian Murphy, School of Applied Psychology, University College Cork, Cork

Date: 2023-03

In recent years there has been an explosion of research on misinformation, often involving experiments where participants are presented with fake news stories and subsequently debriefed. In order to avoid potential harm to participants or society, it is imperative that we establish whether debriefing procedures remove any lasting influence of misinformation. In the current study, we followed up with 1547 participants one week after they had been exposed to fake news stories about COVID-19 and then provided with a detailed debriefing. False memories and beliefs for previously-seen fake stories declined from the original study, suggesting that the debrief was effective. Moreover, the debriefing resulted in reduced false memories and beliefs for novel fake stories, suggesting a broader impact on participants’ willingness to accept misinformation. Small effects of misinformation on planned health behaviours observed in the original study were also eliminated at follow-up. Our findings suggest that when a careful and thorough debriefing procedure is followed, researchers can safely and ethically conduct misinformation research on sensitive topics.

Introduction

The increasing reliance of many news consumers on internet sources, including social media, has led to concerns about the prevalence of online misinformation. The term “fake news” came into use in 2016, and can be used to mean anything from intentionally disseminated falsehoods to inaccuracies in descriptions of news events [1]. The use of the term in academic research is disputed (with some preferring “false news” or “fabricated news”), but many researchers have settled on the definition provided by Lazer et al. [2], that fake news is “fabricated information that mimics news media content in form but not in organizational process or intent”. Discussions around the spread of fake news often allude to concerns that exposure to online misinformation might have significant consequences for public health or democratic institutions. This concern has been magnified with the onset of the COVID-19 pandemic and the associated “infodemic” [3–6]. As a result, a large body of research has investigated the effect of fake news and misinformation on participants’ memories, beliefs, attitudes and behaviours. The rise of this research field brings with it an obligation to establish whether experimentally-presented misinformation can be successfully retracted, and its influence eliminated.

Consequences of misinformation exposure

Years of research have demonstrated that misinformation exposure can result in false or distorted memories; for example, when an eyewitness’s memory of a crime is influenced by a leading question [7], or when a participant is induced to remember a childhood event that never took place [8–10]. Similar observations have been made with respect to online misinformation, with various reports of false memories for fabricated events described in “fake news” articles [11–14]. However, probably the most oft-repeated concern with respect to fake news is the potential for misinformation to directly affect real-world behaviour. Exposure to misinformation in a laboratory setting can influence behaviour: for example, a body of research has examined the consequences of tricking participants into believing that they once became sick after eating a particular food [15–17]. In many cases, participants came to believe in or even remember this fictional event, and showed a subsequent unwillingness to eat that food when it was offered to them. A plethora of studies over the last decade have investigated participants’ belief in and willingness to share fake news (see [18] for a review); more recently, researchers have attempted to directly investigate its impact on behaviour. One study investigated the effects of political misinformation exposure on voting behaviour, but the researchers were only able to measure effects at the municipal level, by comparing the proportion of votes cast for populist parties [19]. The COVID-19 pandemic has awakened new interest in this topic, amid fears that misinformation might affect vaccine uptake or adherence to public health guidelines. Some research has suggested that anti-vaccination misinformation leads to vaccine hesitancy and reduced vaccination intentions [20, 21]; other studies have shown no effect of vaccine misinformation, even following multiple exposures to fake news headlines [22, 23].
In a large study of COVID-19 misinformation, Greene & Murphy recently reported that a single exposure to a fabricated news story had small effects on subsequent behavioural intentions—for example, reading a story about privacy concerns with a forthcoming contact tracing app reduced intentions to download that app by about 5% [22]. Moreover, this study reported that participants who formed a false memory for the events described in the story experienced stronger effects on behaviour than those who simply saw the fake story but did not remember the events.

Debunking and warnings

The potential for long-term harm arising from misinformation and fake news has led to the development of a variety of methods of reducing its impact. These methods generally fall into four categories: 1) specific debunkings or fact checks, in which a piece of misinformation to which participants have already been exposed is subsequently explained to be false (see [24] for a meta-analysis); 2) the use of specific warnings, in which false items are prefaced or accompanied by a warning label advising participants that the information they are about to read is inaccurate or disputed [25–28]; 3) efforts to ‘nudge’ news consumers into a more analytical frame of mind, for example by encouraging them to consider accuracy [29, 30]; and 4) preventative measures in which researchers attempt to inoculate participants against future exposures to misinformation. This last category includes gamified interventions designed to teach participants about online misinformation to help them detect it in future [31, 32], and generic warnings about the presence of misinformation, intended to increase participants’ tendency to monitor information more carefully. The generic-warning approach is cheap and easy to implement, and is therefore the approach often used by governments and social media companies, who advise news consumers to “watch out for bad information” or “be media-smart” [33, 34]. Nevertheless, there is a stark lack of research addressing the effectiveness of these generic warnings. What research there is suggests that this approach may only be effective if it explicitly alludes to the information about to be presented. For example, Clayton et al. [35] presented participants with a general warning prior to exposure to misinformation that included the text, “you will be asked to evaluate the accuracy of some news headlines shared on social media. 
Although some of these stories may be true, others may be misleading”, and encouraged participants to be sceptical when reading the news headlines. Clayton et al. reported that this warning slightly reduced the perceived accuracy of the headlines. Greene & Murphy [22] went a step further and presented participants with generic warnings about misinformation that were not explicitly linked with the subsequently presented information, and found that they did not reduce acceptance of the misinformation—regardless of whether the warning was framed in positive or negative terms.

Retracting misinformation: The role of debriefing

When misinformation is presented in an experimental context, researchers have an ethical obligation to retract that misinformation at the end of the procedure [36]. This is particularly important if the information has the potential to be harmful, for example by suggesting that an alternative medicine might be an effective treatment for a disease. The extent to which misinformation can continue to exert effects on participants’ cognition or behaviour following debriefing is therefore a pressing question. Within the eyewitness memory literature, a body of research has described the continued influence effect—the finding that misinformation that is presented to participants and subsequently withdrawn still colours or distorts their memories of the event (see [37] for a review). A similar observation has been made with respect to fake news and other forms of online misinformation and disinformation; researchers sometimes describe such information as “sticky” and difficult to eradicate [38, 39]. In order for debriefing procedures to be effective at reducing belief and memory for misinformation, the debriefing must specifically debunk the misinformation provided; a general debrief is typically insufficient [40, 41]. Nevertheless, it was recently observed that fewer than a quarter of all misinformation papers published in the last six years reported providing a specific debriefing at the end of their experimental procedure [42]. In this context, it is important to consider the effects of debriefing on false memory as well as false belief, both in order to comply with our ethical obligation to leave participants as we found them [38], and because the presence of a memory may enhance subsequent attitudinal or behavioural change [15, 22]. It is, for example, possible that participants who form such false memories will experience persistent effects on behaviour that are resistant to debriefing. 
One potential reason for the persistence or “stickiness” of misinformation is the so-called “sleeper effect”, whereby misinformation may be reported at higher rates following a delay, even if it has previously been debunked [43, 44]. This research suggests that a core memory of the original misinformation remains, while accompanying warnings, debunkings, or messages regarding source credibility fade away. As a result, misinformation that was initially accompanied by a warning or subsequently retracted might not be accepted by participants at initial testing, but may come to be believed or remembered over time. In the context of the COVID-19 pandemic, and indeed other health-related topics, it is therefore critical to ascertain the long-term effects of misinformation exposure, and to establish whether debunked misinformation continues to be believed, remembered, or acted upon. Murphy et al. [45] recently reported a six-month follow-up of participants in a fake news study who were provided with a specific debrief at the end of the original study. Returning participants were less likely to report a false memory for a story they had previously been exposed to than new participants who had not taken part in the original study, and were also less likely to form a false memory for a novel fake story. This provided strong support for the suggestion that debriefing is effective in reducing false memory for the specific misinformation provided, and may have a protective effect against future misinformation. The interval between debriefing and follow-up in that study was rather long, however. In the absence of reminders or post-event information, memories tend to decay over time [46]. Thus, the effects of misinformation may simply have faded over the course of the six-month period, despite continuing to exert an influence for some time after debriefing. 
Indeed, in the context of a constantly shifting information landscape, such as that accompanying the COVID-19 pandemic, it may be more appropriate to focus on potential effects over a shorter timescale. For example, a researcher may have a valid concern that exposing a participant to misinformation about vaccination might affect their decision to get vaccinated in the following days or weeks. It remains to be seen whether debriefing is effective in reducing misinformation acceptance in the shorter term.

Memory vs. belief

When evaluating the impact of misinformation and the effectiveness of debriefing, it is important to consider the distinction between false memory and false belief. It has previously been suggested that many reports of false memory in the literature may in fact reflect false belief—instances where the participant believes that the event in question took place, but does not have a clear memory of it [47, 48]. Recent evidence has suggested that memory and belief may have discriminable effects on subsequent behavioural intentions; for example, participants who were given a false suggestion that they had previously become ill after eating a certain food were more likely to change their behaviour if they believed the false information than if they simply remembered it [16, 49], the key distinction being between recalling a memory of the event and genuinely believing that it took place. This ambiguity can be addressed during data collection by explicitly distinguishing between memories and beliefs, for example by asking participants to indicate whether they clearly remember seeing or hearing about the event, or simply believe that it happened (e.g. [11, 12, 50]). Similarly, following a debriefing, it is important to distinguish whether participants still believe in, or merely remember encountering, the debunked information. It is not uncommon for people to retain a memory of an event even after they come to believe it never happened; for example, many people remember seeing Santa Claus coming down the chimney as a child, but as adults no longer believe that to be a veridical experience. These “nonbelieved memories” [51, 52] may be expected to have less impact on our future behaviour; for example, you are unlikely to leave cookies out for Santa on Christmas Eve if you don’t believe he really exists, regardless of your childhood memories. 
Similarly, participants may retain the memory of having previously encountered the events described in a fake news story, but subsequently come to understand that the events never took place and should not affect their decision making. Of note, however, recent work by Burnell and colleagues [53] suggests that retracted memories can still serve both helpful and harmful functions for individuals—for example, by influencing thinking or social cohesion.

---
[1] Url: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0280295

Published by PLOS One under a Creative Commons Attribution (CC BY 4.0) license.
