(C) PLOS One
This story was originally published by PLOS One and is unaltered.
Development of a novel methodology for ascertaining scientific opinion and extent of agreement [1]
Authors: Peter Vickers (Department of Philosophy, University of Durham, Durham, United Kingdom); Ludovica Adamo (School of Philosophy, Religion and History of Science, University of Leeds); and co-authors (see citation below).
Date: 2024-12
Abstract

We take up the challenge of developing an international network with capacity to survey the world’s scientists on an ongoing basis, providing rich datasets regarding the opinions of scientists and scientific sub-communities, both at a time and also over time. The novel methodology employed sees local coordinators, at each institution in the network, sending survey invitation emails internally to scientists at their home institution. The emails link to a ‘10 second survey’, where the participant is presented with a single statement to consider, and a standard five-point Likert scale. In June 2023, a group of 30 philosophers and social scientists invited 20,085 scientists across 30 institutions in 12 countries to participate, gathering 6,807 responses to the statement Science has put it beyond reasonable doubt that COVID-19 is caused by a virus. The study demonstrates that it is possible to establish a global network to quickly ascertain scientific opinion on a large international scale, with high response rate, low opt-out rate, and in a way that allows for significant (perhaps indefinite) repeatability. Measuring scientific opinion in this new way would be a valuable complement to currently available approaches, potentially informing policy decisions and public understanding across diverse fields.
Citation: Vickers P, Adamo L, Alfano M, Clark C, Cresto E, Cui H, et al. (2024) Development of a novel methodology for ascertaining scientific opinion and extent of agreement. PLoS ONE 19(12): e0313541.
https://doi.org/10.1371/journal.pone.0313541
Editor: Naeem Mubarak, Lahore Medical and Dental College, PAKISTAN
Received: June 3, 2024; Accepted: October 27, 2024; Published: December 6, 2024
Copyright: © 2024 Vickers et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are in the file 'Data' submitted as Supplementary Information. The data have also been deposited on OSF: https://osf.io/r4sy2/.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction

Attempts to measure scientific opinion are not uncommon [1–5]. In some cases, we know in advance that a scientific consensus (probably) exists, but we also know that many people are unaware of the consensus, or even believe that relevant experts are significantly divided when they are not [6–8]. In some cases there is value in assessing the strength of agreement, or consensus, on a specific topic. In other cases, there may be value in revealing variations in scientific opinion across geographical regions, across fields of expertise, or across time. Survey methods are a valuable tool, but as things stand, very limited means are available for assessing scientific agreement. The survey-based approaches found in the literature have one or more of the following drawbacks: (i) low response rate; (ii) poor international representation; (iii) slow turnaround time; (iv) little scale-up potential; (v) they are one-off studies, not easily allowing for follow-up(s).

We here present a survey-based approach to assessing scientific community opinion designed to achieve a high response rate, include a broad and diverse set of scientists, allow for rapid implementation, be amenable to significant scale-up, and have the potential to be repeatable indefinitely. The motivation for developing a new survey methodology with these virtues is certainly not to replace other methodologies, of which there are many. For example, there is still plenty of space for Delphi methods [9, 10], literature analysis techniques [1], and more targeted surveys of a relatively small number of specialists [2, 5]. Each approach has its advantages and disadvantages, and the choice of methodology will depend on precisely what one hopes to show. For example, if one wishes to ascertain scientific opinion regarding a specific and carefully constructed statement, a literature-based approach may perform poorly: that exact statement may not even exist in the literature. At the same time, one may wish to know what scientists think right now, not across literature spanning several years. In addition, there are options for triangulation: in some cases, it may be possible to demonstrate a strong consensus on a certain topic no matter what method one employs. In other cases, it may be revealing to see that a strong consensus exists according to most scientific opinion assessment methods, but not all.

We argue that our newly developed method constitutes a valuable complement to currently available methods for ascertaining the degree of scientific agreement, both across scientific fields and within particular subcommunities of scientists (as appropriate). The method can be applied to numerous scientific issues, potentially informing policy decisions and public understanding across diverse fields, and on a global scale. In particular, we highlight the recent international expansion of the network, to incorporate c.50,000 scientists across 80 institutions. The method can tease out potential differences of opinion among scientists located in different regions, thereby allowing for sensitivity to local conditions that might be especially relevant to policy issues.

For the sake of this proof-of-concept study we chose the following statement:

(S): Science has put it beyond reasonable doubt that COVID-19 is caused by a virus.

Many other statements could have been chosen. For present purposes we merely wish to set a baseline regarding what scientific consensus looks like on our particular methodology.
Whilst the survey results are not uninteresting, our priority is to develop a novel survey method with important virtues. The key methodological innovations are: (i) Each scientist receives a personal, one-to-one survey invitation email from somebody inside their own institution. (ii) The survey invitation email, and the survey itself, are both maximally concise. In June 2023, a group of 30 philosophers and social scientists sent survey invitation emails to 20,085 scientists across 30 institutions in 12 countries, gathering 6,807 Likert scale responses. In nearly all cases, survey invitation emails were sent internally, by the local coordinator (or ‘spoke rep’) at each of the 30 participating institutions, targeting academic scientists at that institution with their email addresses in the public domain. An example of the survey invitation email is presented in the next section, together with the survey platform itself. Details of the methodology, and its underlying justification, are also presented in the next section. Readers primarily interested in the survey results, and their interpretation, may wish to skip forward to the ‘Results and discussion’ section.
Method

This study was pre-registered on 2nd September 2022 using the Open Science Framework, with the title ‘New method of measurement for strength of scientific consensus regarding a specific statement’; see
https://osf.io/dgq4k. However, we departed from the registered plan in certain ways: (i) Research software engineers at Durham University, UK, created a bespoke surveying platform (Fig 1). This was a vast improvement on the original vision, as described in the pre-registration. (ii) We scaled up to c.20k scientists at 30 institutions. This made the study a more effective test of the methodology, and scaling up was in any case relatively easy once the bespoke surveying platform was in place. (iii) We chose a different survey statement, on the grounds that a more serious survey, concerning a more controversial and politically sensitive statement, should not be used until the methodology has been improved through a series of iterations.
Fig 1. The ‘10 second survey’ platform used.
https://doi.org/10.1371/journal.pone.0313541.g001

Once the project concluded, we also used the Open Science Framework to create a permanent record of our data, code, and figures [11].

Making use of existing academic relationships, we established a network of 30 participating academic institutions. This included finding 30 local coordinators, or ‘spoke reps’, who would send out emails internally, at each institution, using a mail merge application. The majority of the spoke reps were philosophers of science, broadly construed, already known to each other through collaborations and conferences. A few were social scientists, or philosophers of science who collaborate with social scientists and are no strangers to empirical methods. Early on, we judged that the key factor for boosting response rate was that an academic internal to the scientist’s institution invited them to participate, regardless of the specialization of that academic. At two institutions (Leeds and UCI) we allowed doctoral students to send the survey invitation emails, as something of an experiment, to see if this had a significant impact on response rate.

During the period 10th December 2022 through 15th April 2023, five individuals based at the University of Durham prepared spreadsheets of scientists’ names and email addresses for the 30 institutions, finding email addresses in the public domain. It was necessary that (i) scientists were in no way cherry-picked, and (ii) clear guidelines were in place for identifying ‘scientists’. No definition of ‘scientist’ was required; instead, uncontroversial examples of ‘scientists’ were identified using sufficiency criteria. Scientists needed to be clearly affiliated with a university department, institution, or centre, and have a PhD in a relevant field, or an equivalent qualification (see below). Scientists working in industry were not included, since they don’t tend to have handy identifying criteria, or email addresses in the public domain, and for ethical reasons we could only send our ‘unsolicited’ survey invitation emails to scientists with email addresses in the public domain.

Potential participants were grouped into five broad fields: Physics; Chemistry; Biology; Earth Sciences; Health Sciences. Most scientists belonged to a department which could easily be assigned to one of these broad fields, for example a physics, chemistry, or biology (or biological sciences) department. Those assigned to ‘Earth Sciences’ were found in centres, departments, or schools with a wide range of names; these included the ‘School of Earth and Environment’ at the University of Leeds, the ‘Department of Earth System Science’ at UCI, the ‘School of Geography, Earth and Environmental Sciences’ at the University of Birmingham, the ‘Centre for Earth, Ocean and Atmospheric Sciences’ at the University of Hyderabad, the ‘Department of Earth, Atmospheric, and Planetary Sciences’ at Purdue University, and the ‘Department of Geology’ at the University of Pretoria. In nearly all cases it was straightforward to assign a scientist based in one of these centres, departments, or schools to the ‘Earth Sciences’ field. We opted not to include academics based in geography departments, even though there is no sharp distinction between geologists and physical geographers; natural scientists were our focus in this particular study, and many academics based in geography departments are best described as social scientists.
Our intention was never to include all relevant individuals at a certain institution; rather, it was our intention to ensure that all those who were included met our criteria. ‘Health Sciences’ was another complex field. Scientists placed in this category came from academic centres, institutes, departments, and schools with a wide range of names. These included the ‘Department of Public Health and Primary Care’ at the University of Cambridge, the ‘Sue and Bill Gross Stem Cell Research Centre’ at UCI, the ‘School of Population and Public Health’ at UBC, and the ‘School of Dentistry’ at the University of Leeds. At the University of Exeter, within the Faculty of Health and Life Sciences, we included the ‘Department of Clinical and Biomedical Sciences’, but not the ‘Department of Psychology’. This was a judgement call, based on the fact that members of staff in psychology departments often are not working in ‘health sciences’, and have varied backgrounds, with some having PhDs in Philosophy, for example. To save the time of painstakingly examining the credentials of every member of a psychology department, we simply left them out of this study. As this project develops and progresses (see ‘Outlook’) such choices will certainly be revisited. Each potential participant’s broad field of expertise was encoded in their unique voting token (see Fig 2), so that results could be carved up accordingly and analysed.

Some critics might prefer narrow fields of specialization (such as in [2] and [5]), on the grounds that one does better to survey ‘true experts’ as opposed to scientists merely working in the same broad field. However, the notion of ‘true expert’ has been used in the past to exclude important voices from a scientific conversation ([12], see especially Chapter 5). Moreover, it is controversial to suppose that the only worthwhile opinions are those directly informed by ‘first order’ scientific evidence; arguably, scientists use ‘higher order’ evidence all the time ([13], pp. 135–53). In addition, in some cases, focusing on highly relevant specialists will mean that one’s final number of participants is very small, and this can lead to problems such as self-selection effects or echo-chamber effects. The larger the participant number, the more likely it is that one’s cohort will incorporate diverse perspectives, and the harder it is for a sceptic to mount an accusation of bias or cherry-picking. At the same time, we emphasise that our methodology is compatible with future adjustments where scientists are categorised more narrowly into sub-fields, e.g. ‘Astrophysics’. Consider, for example, the survey undertaken as part of the Leverhulme-funded project ‘Exploring Uncertainty and Risk in Contemporary Astrobiology’, or ‘EURiCA’ (a collaboration between philosophers and scientists based at the universities of Durham and Edinburgh in the UK). During February 2024, as part of this project, a decision was made to survey the astrobiology community (broadly construed), with non-astrobiologist scientists included merely for the sake of comparison with the target group. The same methodology was utilized, with the exception that the PI of the ‘EURiCA’ project (Vickers) sent all of the emails personally. We note that the appropriateness of asking experts more broadly, or more narrowly, will depend on how ‘hybridized’ the focus statement is, in the sense of Ballantyne’s ‘hybridized questions’ [14].
Fig 2. One example of the survey invitation email; adjustments across institutions were made, as needed, to account for local norms.
https://doi.org/10.1371/journal.pone.0313541.g002

In a few cases, an institution’s participants came from only some of the five fields. For example, the University of Exeter, in the UK, doesn’t have a Chemistry Department. A few professional chemists do work at Exeter, but we did not include them, since doing so invited problems vis-à-vis the anonymity of participants. If all scientists in a particular field (e.g. Chemistry) at a particular institution (e.g. Exeter) vote, and vote the same way, then it becomes possible to say how a particular scientist voted. We made this very unlikely by ensuring that no fewer than ten scientists occupied any such field+institution category; if any field+institution category were going to be occupied by fewer than ten scientists, we excluded the category entirely. Thus, for example, Exeter participants came from ‘Physics’, ‘Biology’, ‘Earth Sciences’, and ‘Health Sciences’. Similarly, Lancaster lacked participants from ‘Biology’, NYCU Taiwan lacked participants from ‘Earth Sciences’, Stockholm lacked participants from ‘Health Sciences’, and Amsterdam lacked participants from both ‘Biology’ and ‘Health Sciences’. If, in the future, we were to use narrower specialization categories, e.g. ‘Astrophysics’ instead of ‘Physics’, this problem of ensuring anonymity would be amplified. So too, acquiring additional data, such as the seniority of participants, would threaten anonymity. It has been widely documented that de-anonymization is a serious threat [15–17].

Many of the targeted participants were easy to select, e.g. a physicist working in a physics department, with a PhD in physics. Occasionally an EngD compensated for the absence of a PhD. The ‘Health Sciences’ category was more challenging, and here we allowed for participants with an MD, PsyD, DVM, or PharmD qualification. Sometimes the title ‘Honorary Professor’ helped to indicate that an individual was a respected scientist, even though they’d had an unusual career trajectory. In a small handful of cases an individual was included without the expected qualifications, but with every other possible indicator, e.g. a distinguished career and a long list of relevant publications in respected scientific journals. To emphasise, only uncontroversial cases of ‘scientists’ were used; if there was any doubt regarding an individual’s academic status, they were not included. Some judgement calls had to be made; however, with very large numbers of scientists involved, and a small number of borderline cases, results remain robust across different possible judgement calls.

In parallel, Research Software Engineers (RSEs) at Durham prepared the bespoke surveying software and the survey platform (Fig 1). The completed spreadsheets could be entered into the platform to generate unique voting tokens for every potential participant, ensuring that only those targeted could vote, and that each person could only vote once. The voting tokens also tagged votes with the institution and discipline of the voter, but no other information. Raw voting data thus took the form “Durham, Physicist, Strongly Agree”, “Hyderabad, Earth Scientist, Disagree”, etc. Whilst the survey was open, it was possible for those with access to the server (i.e. the software engineers) to see which targeted scientists had voted, since they could see which tokens had been used. When the survey closed (after four weeks), all tokens were deleted from the server, and it was then impossible for anyone to ascertain even whether a given targeted scientist had voted.
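Purely as an illustration of the token logic just described, and not the Durham RSEs’ actual code, the following Python sketch shows how unique voting tokens tagged only with institution and broad field might be generated from the prepared spreadsheets, with any field+institution category containing fewer than ten scientists excluded. All function names and column labels here are hypothetical.

```python
import secrets
from collections import defaultdict

MIN_CATEGORY_SIZE = 10  # anonymity threshold described above

def generate_tokens(participants):
    """participants: list of dicts with 'email', 'institution', 'field' keys
    (hypothetical spreadsheet columns). Returns a token -> (institution, field)
    mapping, kept on the server, and a mailing list used only for the mail
    merge. Categories with fewer than MIN_CATEGORY_SIZE scientists are
    excluded entirely, so that no vote can be traced to an individual."""
    by_category = defaultdict(list)
    for person in participants:
        by_category[(person["institution"], person["field"])].append(person)

    token_tags = {}    # token -> (institution, field)
    mailing_list = []  # (email, token) pairs for the survey invitation emails
    for (institution, field), members in by_category.items():
        if len(members) < MIN_CATEGORY_SIZE:
            continue  # e.g. Chemistry at Exeter: category dropped entirely
        for person in members:
            token = secrets.token_urlsafe(16)  # unguessable, single-use
            token_tags[token] = (institution, field)
            mailing_list.append((person["email"], token))
    return token_tags, mailing_list
```

On this sketch, a recorded vote reduces to a triple such as (‘Durham’, ‘Physics’, ‘Strongly agree’), and deleting the tokens and mailing list once the survey closes mirrors the deletion step described above, after which not even participation can be ascertained.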
To be clear, no field+institution category had a 100% response rate; if one had, we would be able to infer that a given targeted participant in that category had voted. Following standard beta-testing with a group of 50 scientists, initial invitations were sent out during June 2023, with a single reminder email sent out two weeks later, and the entire survey closed after four weeks. In just one case (UNAM) the reminder email wasn’t sent. In three other cases, due to technical or ethical challenges and last-minute complications, the local spoke rep was unable to send the survey invitation email, the reminder email, or both. In these cases, the project lead (Vickers) performed the mail merge externally, from a Durham University email account. It is unclear whether or not this had a serious effect on the resultant response rate, since several factors affect response rate.

Not only was the survey itself maximally short (Fig 1), the survey invitation email was also maximally short (see Fig 2), while still including all necessary information. It was imperative that the email itself be maximally short, since this was the first thing a potential participant would see, and there was no point having a super-short survey if the scientist never clicked on the link to reach the survey. Sometimes a scientist will see a long email and quickly judge that even just reading the email is too big an ask of their time.

The bespoke surveying software and platform have already been published on Zenodo [18]. We conducted this pilot project on a very limited budget, and thus the survey software is limited in various respects. For example, the raw data for each institution had to be downloaded, pieced together, and processed manually, whereas in a future iteration a huge amount of data processing would happen automatically, including generation of an ‘initial report’. Similarly, opt-outs had to be processed manually. It would also be interesting to acquire data on the percentage of potential participants who did click the link in the survey invitation email but did not vote, or only voted much later (following the reminder email). As discussed in the next section, participants voting in the second wave tended to disagree more than participants voting in the first wave. In future iterations of the method, we could test whether disagreement votes in the second wave correlate with opening the survey, but not voting, during the first wave.

We next turn to sampling bias concerns. Sampling happens twice: once when we select from the entire population, and again when scientists themselves choose whether or not to participate. In both cases, sampling bias could occur. Is this a concern? In the first case (kind 1), if (for example) all scientists within China were to disagree, then our survey results would not reflect international scientific community opinion; we’d get a biased result, and we wouldn’t know it, because we have no data from inside China. In the second case (kind 2), if (for example) there were a strong correlation between strongly disagreeing and not participating, then we’d get a biased result. And once again we wouldn’t know it, because we don’t have any data from those who didn’t participate. We respond as follows: (i) Our survey is one of the most balanced scientific community surveys ever conducted, involving a greater number of scientists, across more countries, than comparable previous surveys. Ideally we’d include even more scientists across even more countries, and in fact there are concrete plans for this (see ‘Outlook’).
An additional point is that the diversity is greater than it initially appears. For example, our set of targeted Oxford University scientists includes scientists from more than 30 different countries, when judged by country of academic formation (typically BSc, MSc, and doctoral study). We stress, however, that since votes are entirely anonymous, we cannot determine which of the targeted Oxford University scientists participated. (ii) We achieved a high response rate relative to other survey-based methodologies, and the higher the response rate, the lower the worry about ‘kind 2’ sampling bias. Acquiring data on gender and age (see e.g. [2]) doesn’t help much with such bias, since the primary bias could easily come in elsewhere, such as with political ideology [19, 20].

Apart from these sampling bias concerns, it has been suggested that scientists in a certain region might be somehow biased, simply in virtue of being based in that region. This is plausible; Vickers ([12], Chapter 5, Section 2) discusses how, in the 1930s, 40s, and 50s, scientists in certain countries (including the US, UK, and Australia) were biased towards dismissing ‘continental drift’ as a hopeless idea, whilst other countries (such as South Africa) were more open-minded. This had to do with the contrasting scientific cultures in those countries. Crucially, the survey approach being proposed in the current paper would serve the purpose of revealing such differences, if they exist. Data on differences in scientific opinion across countries and regions are valuable to have. If differences are revealed, one can then conduct an investigation to determine whether those differences are indeed due to problematic biases, or echo-chambers, or are simply reasonable differences of professional judgement. We see our role as the first step in this process: collecting the initial data to reveal a possible bias, which can then be investigated directly.

Finally, a note on ethical approval. This project has been through a comprehensive ethical approval process at Durham University, UK, equivalent to IRB review. A full Data Protection Impact Assessment (DPIA) has been completed, including a Legitimate Interests Assessment (LIA). The assessment result for this project is ’low risk’. Detailed documentation of (i) the ethical approval, (ii) the DPIA, and (iii) the LIA is included as Supporting Information ‘Other’. We note here that the need for participants’ explicit prior consent was waived by the ethics committee. A full discussion is available in the Supporting Information ‘Other’. Briefly, we stress here that if participants did not consent, they could simply ignore or delete the survey invitation email. Clicking the survey link embedded in the email is equivalent to consenting for one’s opinion to be included, anonymously, as part of the survey. On the scale of many thousands of scientists it would be virtually impossible to acquire explicit consent from participants without very heavily impacting the return rate, and thereby undermining the project. Thus another way must be found, and it is ethically sound to take a scientist’s decision to click the survey link in the invitation email as signalling their consent to their opinion being included in the survey. Scientists are able to permanently ‘opt out’ in two clicks.
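Returning briefly to data processing: as noted above, piecing together and tallying the per-institution raw data was manual in this pilot, but would be automated in a future iteration, including generation of an ‘initial report’. The following sketch indicates, under hypothetical file names and column labels (not those of the actual platform), what such an automated step might look like: per-institution vote files are concatenated and tallied by institution, field, and Likert response, and the overall response rate is computed against the number of invitations sent.

```python
import csv
import glob
from collections import Counter

LIKERT = ["Strongly agree", "Agree", "Neither agree nor disagree",
          "Disagree", "Strongly disagree"]

def initial_report(vote_glob="votes_*.csv", invitations_sent=20085):
    """Tally anonymised votes (columns: institution, field, response) from
    per-institution files and print a simple 'initial report'. File names
    and column labels are illustrative only."""
    tallies = Counter()   # (institution, field, response) -> count
    total_votes = 0
    for path in glob.glob(vote_glob):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                tallies[(row["institution"], row["field"], row["response"])] += 1
                total_votes += 1

    print(f"Responses: {total_votes} of {invitations_sent} invited "
          f"({100 * total_votes / invitations_sent:.1f}% response rate)")
    overall = Counter()
    for (_, _, response), n in tallies.items():
        overall[response] += n
    for response in LIKERT:
        print(f"{response}: {overall[response]}")
    return tallies
```

From the same tallies one can then slice the results by institution, by broad field, or by both, subject to the anonymity constraints discussed above.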
Outlook

Between August and October 2023 we expanded the network of institutions considerably, from 30 to 80 (Fig 10). This larger network contains c.50,000 scientists. The primary motivations are: (i) The new network is more internationally representative: 24 institutions in Europe, 12 in North America, 6 in South America, 17 in Asia, 9 in Africa, and 12 in Australasia. This will help to ensure that inferences from the sample to the global population are reliable, and that confidence interval calculations can be trusted. (ii) The new network allows for significant survey size even when quite specific questions are asked of the data, e.g. “Is there a strong consensus amongst earth scientists in Asia?” (iii) In a larger network, survey experiments can be conducted, where each sub-group survey within the experiment is large enough to count as a significant survey in its own right.
Fig 10. Organizational chart. Small coloured dots represent 80 academic institutions, 30 of which already participated in the June–July 2023 project, and 50 of which have since agreed to join. Each institution has a local representative or ‘Spoke Rep’; initials of Spoke Reps are included in the figure.
https://doi.org/10.1371/journal.pone.0313541.g010

Regarding motivation (iii), we emphasise that a huge range of survey experiments becomes possible with the new network. One can make countless adjustments to the methodology, to ascertain how these adjustments affect important variables, including agreement response and response rate. Of the many possible survey experiments worth considering, we mention just two here. First, it would be interesting to compare scientists’ personal views with their impression of the general consensus; in this way, we could explore the phenomenon of ‘pluralistic ignorance’ or ‘preference falsification’ [45]. Second, it would be interesting to split the population of scientists in two, running two surveys in parallel, but with one difference: inclusion or omission of a slider for the scientist to self-report their degree of expertise vis-à-vis the statement in question (a sketch of such a split is given at the end of this section). If a slider can be added without harming the response rate and opt-out rate, it would provide valuable information, and might also allow us to include potentially relevant academics more freely.

Moving forward, we also plan substantial consultation with experts regarding the language, culture, and politics of participating countries and regions. Our experience with CONICET shows clearly the importance of such consultation, and its potential impact on survey results. Among other things, such experts would offer insight regarding the extent to which we should expect scientists to respond according to their personal (and perhaps private) view, as opposed to the view they are expected to have (or think they are expected to have). If we have reason to believe that scientists in a certain country or region are largely not offering their personal view, we would carefully consider whether to include data from that region in our survey.

We also comment here on the flexibility of the proposed methodology going forward. Flexibility along various dimensions is crucial, since the sphere of electronic communication is rapidly evolving. As things stand in 2024, the vast majority of scientists continue to use email on a daily basis, and email remains the primary method of communication amongst scientists. In addition, as things stand, most scientists are comfortable with their email address being in the public domain. If one or both of these change in the near future, that needn’t destroy the proposed methodology; rather, the methodology would need to be adjusted accordingly. It will always remain the case that academics at an institution have a convenient way of communicating with each other. Thus, a spoke rep in our network would still be able to send survey invitations to scientists at her/his institution. Precisely how this would work can only be a matter of speculation at this point in time, and our feeling is that emailing will remain feasible for many years to come.

Finally, we note that, were this endeavour to properly take off on a large, international scale, as proposed, it would need to be run by an international organisation. This is primarily because, otherwise, concerns could arise about any one country dominating the questions asked or the procedure, or having early access to the resultant data.
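As an illustration of the second survey experiment mentioned above, the following sketch shows one way the targeted population could be split into two balanced arms (with and without the expertise slider), stratifying by institution and broad field; it reuses the hypothetical token-to-(institution, field) mapping from the earlier sketch and is not part of the existing platform.

```python
import random
from collections import defaultdict

def split_for_experiment(token_tags, seed=2024):
    """Assign tokens to two parallel survey arms (e.g. with and without a
    self-reported-expertise slider), stratified by institution and broad
    field so that both arms are comparable. Hypothetical sketch; arm labels
    and the fixed seed (for reproducibility) are illustrative only.

    token_tags: dict mapping token -> (institution, field)."""
    rng = random.Random(seed)
    by_category = defaultdict(list)
    for token, tag in token_tags.items():
        by_category[tag].append(token)

    arms = {"with_slider": [], "without_slider": []}
    for tokens in by_category.values():
        rng.shuffle(tokens)
        half = len(tokens) // 2
        arms["with_slider"].extend(tokens[:half])
        arms["without_slider"].extend(tokens[half:])
    return arms
```

Stratifying the random split in this way keeps each field+institution category evenly represented in both arms, so that any difference in response rate or agreement can more plausibly be attributed to the slider itself.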
Conclusion

Policymakers, NGOs, laypersons, and others often ask the question, “Is there a scientific consensus?”, regarding a particular subject matter. Assuming the subject matter is genuinely scientific, this seems like an important question to ask, and answer. If scientists are united in their opinion, that is considered significant, and can be a good basis for particular actions. If scientists are divided, that signals that we are dealing with an ‘open’ question, with reasonable opinions on either side, and (probably) various divergent ‘open’ courses of action. The study described here shows how it is possible to establish a global network to quickly ascertain scientific opinion regarding selected statements, on a large international scale, with high response rate, low opt-out rate, and in a way that incurs little survey fatigue, thus allowing for significant repeatability. Assuming periodic introduction of new spokes to the network, indefinite repeatability becomes realistic.

At the same time, a binary yes/no approach to scientific consensus has obvious problems, and the very question “Is there a scientific consensus on this matter?” warrants scrutiny. There are various different methodologies for assessing scientific opinion, and the degree of ‘consensus’, if any, will vary with method. Even within a single-statement, five-point Likert-scale survey methodology, there are different options for deriving an agreement score from the data (recall Fig 9; see also the illustration below). We have suggested that one does better to set aside the idea of an agreement score, instead setting a baseline for a particular methodology and assessing future survey results comparatively, with respect to that baseline. This is a realistic approach on our particular methodology, because of the possibility of significant, or even indefinite, repeatability: it is crucial that one doesn’t exhaust one’s population of scientists in the course of setting the baseline.

Before closing, it would be remiss not to mention optimistic scenarios. Notwithstanding backfire effects in some groups, in some contexts (e.g. [46, 47]), and doubts concerning the extent to which ‘consensus messaging’ influences people [48], it remains possible that scientific community opinion data, collected on a large international scale as envisioned here, could sway public opinion significantly, on a range of important topics (cf. [49]). This could then make viable the implementation of policies (e.g. tackling climate change) that were previously too controversial for policymakers to take seriously. Consensus data will not be welcomed by all; far from it. But for the sake of those who welcome it, and wish to form opinions in light of it, relevant data should be made available.
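To illustrate why the choice of agreement score matters, consider two of the many possible measures applied to the same five-point Likert tallies: the percentage of respondents agreeing (strongly agree plus agree), and a mean score on a 1–5 scale. The counts below are purely illustrative and are not the survey’s results; the point is only that different summary measures can frame the same data differently, which is why we prefer baseline comparisons within a fixed methodology.

```python
def agreement_measures(counts):
    """counts: dict mapping Likert response -> number of votes.
    Returns two of the many possible 'agreement scores': the percentage of
    respondents agreeing, and a mean score on a 1-5 scale."""
    weights = {"Strongly agree": 5, "Agree": 4,
               "Neither agree nor disagree": 3,
               "Disagree": 2, "Strongly disagree": 1}
    total = sum(counts.values())
    percent_agree = 100 * (counts.get("Strongly agree", 0)
                           + counts.get("Agree", 0)) / total
    mean_score = sum(weights[r] * n for r, n in counts.items()) / total
    return percent_agree, mean_score

# Purely illustrative counts, not the study's data:
example = {"Strongly agree": 700, "Agree": 200,
           "Neither agree nor disagree": 50,
           "Disagree": 30, "Strongly disagree": 20}
print(agreement_measures(example))  # -> (90.0, 4.53)
```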
Acknowledgments

In the absence of any external funding for this project, we wish to sincerely thank the University of Durham, UK, for the seedcorn funding that enabled the project to go ahead. Neil Levy’s time is funded by the Templeton Foundation (grant #62631) and the Arts and Humanities Research Council (AH/W005077/1). We thank audiences at the 2023 conference of the British Society for the Philosophy of Science, the Department of Philosophy at the University of Birmingham (UK), the Department of Philosophy at the University of Vienna, the Department of Philosophy at the University of Helsinki, and the Department of STS at University College London, for feedback and suggestions. We would also like to thank Nick Allum, Stephen Lewandowsky, and Naomi Oreskes for helpful discussion. We also extend sincere thanks to two anonymous reviewers.
[END]
---
[1] Url:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0313541
Published and (C) by PLOS One
Content appears here under this condition or license: Creative Commons - Attribution BY 4.0.