(C) PLOS One
This story was originally published by PLOS One and is unaltered.



ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi [1]

William T. Gattrell (Bristol Myers Squibb, Uxbridge, United Kingdom); Patricia Logullo (Centre for Statistics in Medicine, University of Oxford, EQUATOR Network UK Centre, Oxford); Esther J. van Zuuren

Date: 2024-01

The ACCORD checklist is the first reporting guideline applicable to all consensus-based studies. It will support authors in writing accurate, detailed manuscripts, thereby improving the completeness and transparency of reporting and providing readers with clarity regarding the methods used to reach agreement. Furthermore, the checklist will make clear to readers the rigor of the consensus methods underpinning the recommendations. Reporting consensus studies with greater clarity and transparency may enhance trust in the recommendations made by consensus panels.

We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines: a systematic review was followed by a Delphi process and meetings to finalize the ACCORD checklist. The preliminary checklist was drawn from the systematic review of existing literature on the quality of reporting of consensus methods and suggestions from the Steering Committee. A Delphi panel (n = 72) was recruited with representation from 6 continents and a broad range of experience, including clinical, research, policy, and patient perspectives. The 3 rounds of the Delphi process were completed by 58, 54, and 51 panelists, respectively. The preliminary checklist of 56 items was refined to a final checklist of 35 items relating to the article title (n = 1), introduction (n = 3), methods (n = 21), results (n = 5), discussion (n = 2), and other information (n = 3).

In biomedical research, it is often desirable to seek consensus among individuals who have differing perspectives and experience. This is important when evidence is emerging, inconsistent, limited, or absent. Even when research evidence is abundant, clinical recommendations, policy decisions, and priority-setting may still require agreement from multiple, sometimes ideologically opposed parties. Despite their prominence and influence on key decisions, consensus methods are often poorly reported. Our aim was to develop ACCORD (ACcurate COnsensus Reporting Document), the first reporting guideline dedicated and applicable to all consensus methods used in biomedical research, regardless of the objective of the consensus process.

Competing interests: PL is a member of the UK EQUATOR Centre, based in the University of Oxford; EQUATOR promotes the use of reporting guidelines, many of which are developed using consensus methods, and she is personally involved in the development of other reporting guidelines. WTG is an employee of Bristol Myers Squibb. KG is an employee and shareholder of AbbVie. APH, in the past 5 years, has worked with Reckitt Benckiser for the development of the definitions and management of gastro-oesophageal reflux disease. CCW is an employee, Director, and shareholder of Oxford PharmaGenesis Ltd, a Director of Oxford Health Policy Forum CIC, a Trustee of the Friends of the National Library of Medicine, and an Associate Fellow of Green Templeton College, University of Oxford. NH is an employee of OPEN Health Communications. ELH is an employee of Camino Communications. DT is co–editor-in-chief of the Journal of Clinical Epidemiology and chairs the Scientific Advisory Committee for the Centre for Biomedical Transparency. AP, PB, and EJvZ report no conflicts of interest. At the outset of the work, NH was an employee of Ogilvy Health UK and WTG was an employee of Ipsen; ELH was an employee of OPEN Health Communications at the time of manuscript development.

Funding: The project did not receive any direct funding. The employers of the Steering Committee members agreed to contribute their employees’ time to the project. The Steering Committee members’ employers had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Copyright: © 2024 Gattrell et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Therefore, a comprehensive guideline is needed to report the numerous methods available to assess and/or guide consensus in medical research. The ACcurate COnsensus Reporting Document (ACCORD) reporting guideline project was initiated to fulfill this need. We followed EQUATOR Network–recommended best practices for reporting guideline development, which included a systematic review and a consensus exercise. Our aim was to develop a new tool, applicable worldwide, that would facilitate the rigorous and transparent reporting of all types of consensus methods across the spectrum of health research [29]. A comprehensive reporting guideline will enable readers to understand the consensus methods used to develop recommendations and therefore has the potential to positively impact patient outcomes.

Reporting guidelines can enhance the reporting quality of research [20–22], and the absence of a universal reporting guideline for studies using consensus methods may contribute to their well-documented suboptimal reporting quality [5,19,23–25]. A systematic review found that the quality of reporting of consensus methods in health research was deficient [19], and a methodological review found that articles providing guidance on reporting Delphi methods vary widely in their criteria and level of detail [25]. The Conducting and Reporting Delphi Studies (CREDES) guideline was designed to support the conduct and reporting of Delphi studies, with a focus on palliative care [26]. The 23-item AGREE-II instrument, which is widely used for reporting clinical practice guidelines [27], and COS-STAR, for reporting core outcome set development [28], both contain very few items related to consensus.

Despite their critical role in healthcare and policy decision-making, consensus methods are often poorly reported [19]. Generic problems include inconsistency and lack of transparency in reporting, as well as more specific deficiencies: a lack of detail regarding how participants or steering committee members were selected, missing panelist background information, no definition of consensus, missing response rates after each consensus round, no description of the level of anonymity or how anonymity was maintained, and a lack of clarity over what feedback was provided between rounds [19].

Consensus obtained from a group of experts using formal methods is recognized as being more reliable than individual opinions and experiences [16–18]. Consensus methods help to overcome the challenges of gathering opinions from a group, such as discussions being dominated by a small number of individuals, peer pressure to conform to a particular opinion, or the risk of group biases affecting overall decision-making [4].

Consensus methods are widely applied in healthcare (Table 2). However, the specific method has the potential to affect the result of a consensus exercise and shape the recommendations generated. In addition, the expertise needed to contribute to the consensus process will vary depending on the research subject, and a range of participants may be required, including, but not limited to, clinical guideline developers, clinical researchers, healthcare professionals, epidemiologists, ethicists, funders, journal editors, laboratory specialists, medical publication professionals, meta-researchers, methodologists, pathologists, patients and carers/families, pharmaceutical companies, public health specialists, policymakers, politicians, research scientists, surgeons, systematic reviewers, and technicians.

Evidence-based medicine relies on (1) the best available evidence; (2) patients’ values, preferences, and knowledge; and (3) healthcare professionals’ experience and expertise [1,2]. When healthcare professionals need to make clinical decisions, or when recommendations or guidance are needed and there is uncertainty on the best course of action, such as when evidence is emergent, inconsistent, limited, or absent (not least in rapidly evolving fields such as pandemics [3]), the collation and dissemination of knowledge, experience, and expertise becomes critical. Coordinating this process may be best achieved through the use of formal consensus methods [4] such as those described in Table 1.

Methods

Scope of ACCORD

ACCORD is a meta-research project to develop a reporting guideline for consensus methods used in health-related activities or research (Table 2) [29]. The guideline was designed to be applicable to simple and less structured methods (such as consensus meetings), more systematic methods (such as the nominal group technique or Delphi), or any combination of methods used to achieve consensus. The ACCORD checklist should therefore be applicable to work involving any consensus method. In addition, although ACCORD has been structured to support reporting in a scientific manuscript (with the traditional article sections such as introduction, methods, results, and discussion), the checklist items can assist authors in writing other types of text describing consensus activities. ACCORD is a reporting guideline that provides a checklist of items that we recommend be included in any scientific publication in healthcare reporting the results of a consensus exercise. It is not, however, a methodological guideline: it is not intended to provide guidance on how researchers and specialists should design their consensus activities, and it makes no judgment on which method is most appropriate in a particular context. Furthermore, ACCORD is not intended to be used for reporting research in fields outside health, such as social sciences, economics, or marketing.

Study design, setting, and ethics

The ACCORD project was registered prospectively on January 20, 2022, on the Open Science Framework [30] and the EQUATOR Network website [31], and received ethics approval from the Central University Research Ethics Committee at the University of Oxford (reference number: R81767/RE001). The ACCORD protocol has been published previously [29] and followed the EQUATOR Network recommendations for developing a reporting guideline [32,33], starting with a systematic review of the literature [19], followed by a modified Delphi process. In a planned departure from the Delphi method as originally formulated, the preliminary list for voting was based on the findings of this systematic review rather than on initial ideas or statements from the ACCORD Delphi panel, although the panel could suggest items during the first round of voting. In addition, the ACCORD Steering Committee made final decisions on item inclusion and refined the checklist wording, as described below.

ACCORD Steering Committee

WTG and NH founded the ACCORD project, seeking endorsement from the International Society for Medical Publication Professionals (ISMPP) in April 2021. ISMPP provided practical support and guidance on the overall process at the project's outset but was not involved in checklist development. The ACCORD Steering Committee, established over the following months, was multidisciplinary in nature and comprised researchers from different countries and settings. Steering Committee recruitment was iterative, with new members invited as needs were identified by the founders and the existing committee, to ensure inclusion of the desired range of expertise and experience. Potential members were identified via ISMPP, literature research, professional connections, and network recommendations. When the protocol was submitted for publication, the Steering Committee had 11 members (WTG, PL, EJvZ, AP, CCW, DT, KG, APH, ELH, NH, and Robert Matheis [RM] from ISMPP). Bernd Arents joined the Steering Committee in July 2021 but left in December of that year; RM left in August 2022; both cited an excess of commitments as their reason for stepping down. Patient partners were invited as Delphi panelists. Paul Blazey joined the Steering Committee in September 2022 as a methodologist to support the execution of the ACCORD Delphi process and provide additional expertise on consensus methods. The final Steering Committee, responsible for the Delphi process and development of the checklist, had members working in 4 countries: Canada, the United Kingdom, the United States of America, and the Netherlands. A wide range of professional roles was represented on the Steering Committee, with several members bringing experience from more than one area, including clinical practitioners (medical doctor, physical therapist), methodologists (consensus methodologist, research methodologist, expert in evidence synthesis), medical publication professionals (including those working in the pharmaceutical industry), journal editors, a representative of the EQUATOR Network, and a representative of the public (S1 Text).

Protocol development

The ACCORD protocol was developed by the Steering Committee before the literature searches or Delphi rounds commenced and has been published previously [29]. An overview of the methods used, together with some amendments made to the protocol during the development of ACCORD in response to new insights, is provided below.

Systematic review and development of preliminary checklist

A subgroup of the Steering Committee conducted a systematic review with the dual purpose of identifying existing evidence on the quality of reporting of consensus methods and generating the preliminary draft checklist of items that should be reported [19]. The systematic review has been published [19]; it identified 18 studies that addressed the quality of reporting of consensus methods, 14 focused on Delphi only and 4 including Delphi and other methods. A list of deficiencies in consensus reporting was compiled based on the findings of the systematic review. Items in the preliminary checklist were subsequently derived from the systematic review, both from the data extraction list (n = 30) and from other information relevant to reporting consensus methods (n = 26) [19]. Next, the Steering Committee voted on whether the preliminary checklist items (n = 56) should be included in the Delphi, via 2 anonymous online surveys conducted using Microsoft Forms (see S2 Text). There were 5 voting options: “Strongly disagree,” “Disagree,” “Agree,” “Strongly agree,” and “Abstain/Unable to answer.” NH processed the results in Excel and WTG provided feedback; therefore, neither voted. Items that received sufficient support (i.e., >80% of respondents voting “Agree” or “Strongly agree”) were included in the Delphi, while the rest were discussed by the Steering Committee for potential inclusion or removal. During the first survey, Steering Committee members could propose additional items based on their knowledge and expertise; these new items were voted on in the second Steering Committee survey. Upon completion of this process, the Steering Committee approved and updated the preliminary draft checklist, which was then prepared for voting by the Delphi panel. Items were clustered or separated as necessary for clarity.
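As an illustration, the committee-stage inclusion rule can be restated in code. The following is a minimal sketch in Python; the function name is ours, and the exclusion of “Abstain/Unable to answer” votes from the denominator is an assumption on our part, since the article does not state how abstentions were counted at this stage.

    from collections import Counter

    AGREE = {"Agree", "Strongly agree"}
    ABSTAIN = "Abstain/Unable to answer"

    def passes_committee_vote(votes, threshold=0.80):
        # An item passed if strictly more than 80% of respondents voted
        # "Agree" or "Strongly agree". Abstentions are dropped from the
        # denominator here, which is an assumption, not a stated rule.
        counts = Counter(votes)
        voting = sum(n for option, n in counts.items() if option != ABSTAIN)
        if voting == 0:
            return False
        return sum(counts[o] for o in AGREE) / voting > threshold

    # 9 of 10 non-abstaining members in favor: 90% > 80%, so the item
    # would proceed to the Delphi.
    votes = ["Strongly agree"] * 5 + ["Agree"] * 4 + ["Disagree", ABSTAIN]
    print(passes_committee_vote(votes))  # True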

Delphi panel composition

Using an anonymous survey (June 9–13, 2022), the Steering Committee voted on the desired profile of Delphi panelists for the ACCORD project. There was unanimous agreement that geographic representation was important; the aim was to recruit from all continents (thereby covering both the Northern and Southern hemispheres) and to include participants from low-, middle-, and high-income countries to account for potential differences in cultural and ideological ways of reaching agreement. The aim was to include a broad range of participants: clinicians, researchers experienced in the use of consensus methods and in clinical practice guideline development, patient advocates, journal editors, publication professionals and publishers, regulatory specialists, public health policymakers, and pharmaceutical company representatives. As described in the ACCORD protocol [29], there are no generally agreed standards for panel size in Delphi studies, although panels of 20 to 30 are common. The target panel size (approximately 40 panelists) was therefore guided by the desired representation and by the need to ensure an acceptable number of responses (20, assuming a participation rate of 50%) in the event of withdrawals or partial completion of review.
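The panel-size arithmetic described above can be made explicit. This minimal sketch (Python; the function name is ours) recovers the reported target of approximately 40 panelists from the stated minimum of 20 responses and the assumed 50% participation rate.

    import math

    def target_panel_size(min_responses, participation_rate):
        # Smallest panel that still yields the required number of completed
        # responses under the assumed per-round participation rate.
        return math.ceil(min_responses / participation_rate)

    print(target_panel_size(20, 0.50))  # 40, the approximate target reported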

Delphi panel recruitment

Potential participants for the Delphi panel were identified in several ways: from the author lists of publications included in the systematic review, from invitations circulated via an EQUATOR Network newsletter (October 2021) [34] and at the European Meeting of ISMPP in January 2022, and by contacting groups potentially impacted by ACCORD (e.g., the UK National Institute for Health and Care Excellence [NICE]). Individuals were also invited to take part through the ACCORD protocol publication [29], and members of the Steering Committee contacted individuals in their networks to fill gaps in geographical or professional representation. To minimize potential bias, no Steering Committee member participated in the Delphi panel. Invitations were issued to candidate panelists who satisfied the inclusion criteria. While participants were not generally asked to suggest other panel members, in some cases invitees proposed a colleague to replace them on the panel. Only the Steering Committee members responsible for administering the Delphi had access to the full list of ACCORD Delphi panel members. Panelists were invited by email, and reminder emails were sent to those who did not respond. Of the 133 individuals invited, 72 agreed to participate. No panelists or Steering Committee members were reimbursed or remunerated for taking part in the ACCORD project.
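For context, a quick back-of-the-envelope check relates the recruitment figures to the response target set out above. The arithmetic comes from the numbers reported in the article; the code itself is a sketch of ours.

    # Recruitment and response arithmetic from the figures reported above.
    invited = 133
    agreed = 72
    print(f"Acceptance rate: {agreed / invited:.0%}")  # ~54% of invitees joined

    # Under the 50% participation rate assumed during planning, a panel of 72
    # would be expected to yield about 36 responses, comfortably above the
    # minimum of 20; in practice 58, 54, and 51 panelists completed the rounds.
    expected_responses = round(agreed * 0.50)
    print(expected_responses)  # 36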

Planned Delphi process

The Delphi method was chosen to validate the checklist, in line with recommendations for developing reporting guidelines [32]. A 3-round Delphi was planned to allow for iteration, with the option to include additional rounds if necessary. Panelists who agreed to take part received an information pack containing an introductory letter, a plain language summary, an informed consent statement, links to the published protocol and systematic review, and the items excluded by the Steering Committee (see S3 Text). Survey materials were developed by PL and PB in English and piloted by WTG and NH; editorial and formatting changes were made following the pilot stage to optimize the ease of use of the survey. In an amendment to the protocol, the order of candidate items was not randomized within each manuscript section. The Jisc Online Survey platform (Jisc Services, Bristol, United Kingdom) was used to administer all Delphi surveys, ensuring anonymity through automatic coding of participants. Panelists were sent reminders to complete the survey via the survey platform, and one email reminder was sent the day before the deadline for each round.

The Delphi voting was modified to offer 5 options: “Strongly disagree,” “Disagree,” “Neither agree nor disagree,” “Agree,” and “Strongly agree.” Votes of “Neither agree nor disagree” were included in the denominator. The consensus threshold was defined a priori as ≥80% of a minimum of 20 respondents voting “Agree” or “Strongly agree.” Reaching the consensus threshold was not a stopping criterion: for inclusion in the final checklist, each item was required to achieve the consensus criteria following at least 2 rounds of voting. This ensured that all items had the opportunity for iteration between rounds (a central tenet of the Delphi method) [6] and enabled panelists to reconsider their voting position in light of feedback from the previous round.

In Round 1, panelists had the opportunity, anonymously, to suggest new items to be voted on in subsequent rounds. Panelists were also able to provide anonymous free-text comments in each round to add a rationale for their chosen vote or to suggest alterations to the item text. After each voting round, the comments were evaluated and integrated by WTG, PL, PB, and NH and validated by the Steering Committee. If necessary, semantic changes were made to items to improve clarity and concision. Feedback given to participants between rounds included the anonymized total votes and the percentage in each category (see example in S4 Text), to allow panelists to assess their position in comparison with the rest of the group, as well as the relevant free-text comments on each item. Items that did not achieve consensus in Rounds 1 and 2 were revised or excluded based on the feedback received from the panelists. Items that were materially altered (such that their original meaning changed) were considered new items. All wording changes were recorded, and panelists received a table highlighting wording changes as part of the feedback process so that they could see modifications to checklist items (for example feedback documents, see S5 Text). Items reaching consensus over 2 rounds were removed from the Delphi for inclusion in the checklist. Items achieving agreement in Round 1 that then fell into disagreement in Round 2 were considered to have “unstable” agreement; these unstable items were revised based on qualitative feedback from the panel and included for revoting in Round 3.
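The voting rules described above lend themselves to a compact restatement. The sketch below (Python; function and label names are ours, not from the ACCORD materials) encodes the a priori consensus threshold and our reading of how stable versus unstable agreement across the first 2 rounds was handled.

    AGREE = {"Agree", "Strongly agree"}

    def reaches_consensus(votes, threshold=0.80, min_respondents=20):
        # Consensus: >=80% of a minimum of 20 respondents voting "Agree" or
        # "Strongly agree". "Neither agree nor disagree" votes stay in the
        # denominator, as specified above.
        if len(votes) < min_respondents:
            return False
        return sum(1 for v in votes if v in AGREE) / len(votes) >= threshold

    def status_after_round_2(round1_votes, round2_votes):
        r1 = reaches_consensus(round1_votes)
        r2 = reaches_consensus(round2_votes)
        if r1 and r2:
            return "include"        # consensus over 2 rounds: leaves the Delphi
        if r1 and not r2:
            return "unstable"       # revise from feedback; revote in Round 3
        return "revise or exclude"  # no consensus in Rounds 1 and 2

    # Example: 48 of 58 panelists agree in Round 1 (~83%) and 45 of 54 in
    # Round 2 (~83%), so the item would be included in the checklist.
    r1 = ["Agree"] * 48 + ["Neither agree nor disagree"] * 6 + ["Disagree"] * 4
    r2 = ["Strongly agree"] * 45 + ["Disagree"] * 9
    print(status_after_round_2(r1, r2))  # include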

Steering Committee checklist finalization process

Consistent with the protocol [29], following completion of the Delphi process, the Steering Committee convened for a series of three 2-hour virtual workshops (March 7, 14, and 16, 2023) to make decisions and finalize the checklist. For each item, WTG, PL, PB, and NH presented a summary of voting, the comments received, and a recommended approach; the possible recommended approaches are shown in S6 Text. Each recommendation (for example, to keep approved items or to confirm exclusion of rejected items) was accompanied by an explanation of why WTG, PL, PB, or NH felt it would be the most appropriate action, followed by a discussion among Steering Committee members in which the suggested action could be challenged and changed. Grammatical changes were also considered at this stage, but only where they did not change the meaning of an approved item. Following review of all items, the order of the checklist items was evaluated by WTG, PL, PB, and NH.

[END]
---
[1] URL: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1004326
