(C) PLOS One [1]. This unaltered content originally appeared in journals.plosone.org.
Licensed under Creative Commons Attribution (CC BY) license.
url:https://journals.plos.org/plosone/s/licenses-and-copyright

------------



Causal reasoning without mechanism

Authors: Selma Dündar-Coecke (Quantinuum; Centre for Educational Neuroscience, London, United Kingdom) and Gideon Goldin (Department of Cognitive, Linguistic, and Psychological Sciences)

Date: 2022-05

Unobservable mechanisms that tie causes to their effects generate observable events. How can one make inferences about such hidden causal structures? This paper introduces the domain-matching heuristic to explain how humans perform causal reasoning when lacking mechanistic knowledge. We posit that people reduce the otherwise vast space of possible causal relations by focusing only on the likeliest ones. When thinking about a cause, people tend to think about possible effects that participate in the same domain, and vice versa. To explore the specific domains that people use, we asked people to cluster artifacts. The analyses revealed three commonly employed mechanism domains: the mechanical, the chemical, and the electromagnetic. Using these domains, we tested the domain-matching heuristic by examining adults’ and children’s causal attribution, prediction, judgment, and subjective understanding. We found that people’s responses conform to domain-matching. These results provide evidence for a heuristic that explains how people engage in causal reasoning without directly appealing to mechanistic or probabilistic knowledge.

Funding: Steven Sloman was supported by a grant from the Thrive Center for Human Development. In addition, this publication was made possible through a grant from the Varieties of Understanding Project at Fordham University and the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Varieties of Understanding Project, Fordham University, or the John Templeton Foundation.

Copyright: © 2022 Dündar-Coecke et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. Introduction

Even experienced cyclists cannot reliably draw a picture of the mechanism that makes a bicycle work [1]. Moreover, few people can explain how a ballpoint pen works; in fact, when they try, they discover they do not understand such artifacts as well as they thought they did [2]. Most of us live with the illusion that we have more causal knowledge than in fact we do [3].

It is unrealistic to expect people to remember all that we learn about how things work. Things have many layers of complexity, both because they have parts that themselves can be decomposed at multiple levels, and because they interact with so many other things (understanding ballpoint pens requires understanding writing). So how do people make inferences about what causes what if things are too complex for them to understand? A number of proposals have been offered. Einhorn and Hogarth [4] suggested that people rely on several "cues-to-causality," including covariation, temporal order, contiguity in time and space, and similarity of cause and effect. Evidence for some of these cues has accumulated (see e.g., Lagnado & Sloman [5] for temporal order and contiguity; LeBoeuf & Norton, [6], for similarity; Johnson & Keil [7] and Rottman & Hastie [8], for more recent reviews). However, such cues only provide pairwise information about variables. Causal mechanisms generally involve sets of variables working together in a highly structured way (like the pedals, gears, chain, wheels, and frame of a bicycle) and pairwise relations are not sufficient. Johnson and Keil propose a heuristic for capturing a certain kind of structure—the hierarchical structure of events—by positing a level-matching principle.

Our proposal appeals to similarity indirectly and offers a heuristic for making causal inferences that respects structural relations, not merely cause-effect pairs. Our proposal assumes that retaining abstraction information from a small set of categories of mechanism is cognitively feasible. Whenever we lack specific knowledge about a process, we can note the broader type of that process to bootstrap causal inference. We refer to such types as “domains.” Our claim is not that mechanisms come in a fixed hierarchy of types, but that any appropriate abstraction process induces a category and therefore a domain. Thus, causal domains tend to reflect regularities in the world; and we are in fact encouraged to consider only the likeliest causal relations, effectively reducing an otherwise overwhelming search space.

We propose that humans tend to make inferences and ascribe causal structures based on the domain of the corresponding mechanism. Categorical domains help to identify types of entities [9, 10], but mechanism domains enable us to identify the kinds of parts and processes that are causally related and operate in similar ways. For instance, the knowledge that the mechanism for making calls on a cell phone is electromagnetic is enough to guide a swath of inductive inferences. The belief that a cellphone uses an electromagnetic mechanism suggests that its performance depends on whether the phone case is made of metal, but not its color. In most cases, although a detailed understanding of how a cell phone works is not necessary to use it, mechanism domain knowledge helps to identify the type and scope of relevant information. It makes many inferences feasible and economical.

We propose the domain-matching heuristic, hypothesizing that we are likelier to assume two events are causally related if they share the same mechanism domain. When we observe a cause that participates in the mechanical domain, we are more likely to infer a corresponding effect that also participates in the mechanical domain. If we observe an effect in the chemical domain, we will look to possible causes that also participate in the chemical domain. Our goal in this paper is to test this proposal and explore whether people’s causal attributions fit mechanism domains when they link potential causes and effects. We also conjecture that “cross-domain” mechanisms might be relatively rare, rendering the domain-matching heuristic a useful guide most of the time.
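The heuristic described above is essentially a filtering rule: given an observed effect, restrict the candidate causes to those sharing its mechanism domain. A minimal illustrative sketch follows, using the three domains the paper identifies (mechanical, chemical, electromagnetic); the specific events and their domain labels are invented for illustration and are not from the study materials.

```python
# Toy sketch of the domain-matching heuristic: candidate causes are
# filtered by whether they share the observed effect's mechanism domain.
# The events and domain assignments below are hypothetical examples.

DOMAINS = {
    "dented fender": "mechanical",
    "rusted hinge": "chemical",
    "static shock": "electromagnetic",
    "hammer blow": "mechanical",
    "acid spill": "chemical",
    "power surge": "electromagnetic",
}

def likely_causes(effect, candidates):
    """Return the candidate causes whose mechanism domain matches the effect's."""
    effect_domain = DOMAINS[effect]
    return [c for c in candidates if DOMAINS[c] == effect_domain]

candidates = ["hammer blow", "acid spill", "power surge"]
print(likely_causes("dented fender", candidates))  # ['hammer blow']
```

The point of the sketch is only that domain-matching prunes the search space before any mechanistic or probabilistic reasoning is needed; cross-domain causes (which the paper conjectures are relatively rare) would be the cases this filter misses.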

1.1 Causal attribution in the absence of mechanism knowledge

Causal processes can be reduced to mechanisms: sequences of interconnected events that involve parts and processes [11, 12]. In psychology, the Piagetian framework recognized this long ago, proposing a mechanistic view of causal systems, where mechanism refers to, for example, how a bicycle works. Johnson and Ahn [13] define mechanisms as systems of visible and invisible characteristics interacting systematically, where the same effects are produced by the same causes. Park and Sloman [14] define them as a set of causes, enablers, disablers, and preventers involved in producing an effect, unfolding over time. Lombrozo [15; see also 16] highlights that mechanisms have a privileged relationship to explanations; they do not simply identify causes but illustrate how the cause brought about the effect.

Exposure to mechanisms is inevitable. Within months of taking our first steps, we start to establish some appreciation of how the world works without developing a deep understanding of its operating mechanisms. Children learn to keep their ice cream in the fridge on a sunny day; cooks use an oven to bake a cake. How exactly do people think they understand phenomena without knowing the details of the causal interactions between functional parts? We argue that mechanism domains offer a heuristic for a variety of causal reasoning tasks, so that even though most of us do not know how a fridge or an oven works, we rely on beliefs rooted in knowledge of mechanism domains. It is this ability that helps us organize knowledge into content domains like biology or chemistry. Findings showing young children’s ability to distinguish animals from artifacts support this view: children demonstrate distinct explanatory understandings of, for instance, biological causal agents (animals) and mechanical ones (machines or blocks).
Children seem to believe there are distinct mechanisms driving the causal relations in different domains [17], with multiple causal-explanatory construals for physical, biological, psychological, and chemical events. For instance, when asked, a preschooler can state that hammers break things whereas water makes things wet, possibly by associating causes with effects in a domain-specific manner [18–21]. Comparing young children’s and adults’ responses over a series of five experiments, Shultz [22] also showed that, in the physical domain for instance, familiarity with the objects in question is not a strong indicator of mechanism-level thinking. In one experiment, where children and adults were presented with sound, wind, and light transmissions in different procedures, participants’ tendency to analyze causal mechanisms was not restricted to prior knowledge. Even young children knew that a spot of light was likelier to be due to a flashlight than a fan, and their justifications more often relied on knowledge about the nature of the transfer (e.g., light spots are round) than on past experience (e.g., it looks like my flashlight).

The relational complexities inherent in most causal mechanisms seem to drive people to develop beliefs about certain relational patterns, with some explanatory frameworks allowing them to make sense of properties and causal relations. Work on intuitive and lay theories suggests that these beliefs are intrinsically limited, incomplete, or partial models of how things work [23, 24]. Although the majority of everyday explanations invoke cause-effect relations, most without requiring domain expertise (the wind blew the fence down), people often seem to determine appropriate relationships through a mental process producing subjective beliefs about reality. Consider someone who believes they will get sick if they fail to use soap.
According to Ahn and Kalish [9], this thought implies a belief in a mechanism, whether it involves viruses, miasma, or something else. Walsh and Sloman [25] highlight that most of the evidence on reasoning about causal relations supports mechanism-based theories (see also [26]). Most of us do not know how soap works, but we act by relying on some beliefs about an underlying process. Ahn and Kalish explain this as “people’s beliefs about causal relations include… [among other things]… a set of more or less elaborated beliefs about the nature of that mechanism, described in theoretical terms” (p. 5). In our view, these beliefs are rooted in knowledge of mechanism domains.

Across a series of experiments, Rozenblit and Keil [2] asked people to rate their own understanding of how a number of devices worked. People’s ratings were lower after trying to explain how the device worked, after seeing an expert explanation, or after being asked a key comprehension question about the system, suggesting that people think they know how things work better than they actually do (see also [3]). What allows people to address causal questions when their causal knowledge is so impoverished? We propose that a mechanism-domain-matching heuristic is one of the strategies humans employ in new situations to close the gap between causal explanation and prediction on the one hand, and prior knowledge and understanding on the other. Understanding, explaining, and predicting are intimately related but distinct competences [27], and the differences between them give cues as to how people can predict and explain a causal phenomenon without fully understanding it. With respect to the categorization of knowledge, this kind of representation would constitute what Sloman, Lombrozo, and Malt [28] call an extra-strong ontology, wherein differences between any proposed domains are irreducible to other cognitive representations.
In their analysis of domain-generality versus domain-specificity in higher-order cognition, they outline four further possible ontologies, each decreasingly committed to a strong distinction among domains of knowledge (with “no ontology” as the fifth, opposite extreme). They argue for a mild ontology, which they describe as follows: “Domain differences in categorization and inference are systematic, but not cognitively primitive. People tend to reason and classify phenomena using domain-general causal reasoning mechanisms. To the extent that domains correspond to causal discontinuities in the world, systematic differences between domains may emerge, and domains thus serve as a useful shorthand for theorists to roughly classify different types of processing. However, in a given classification or inference, an object is processed the way it is by virtue of its causal history and other causal roles, which will correlate imperfectly with its domain” (p. 201). This implies that mechanism domains are not parts of a pre-specified ontology; rather, they emerge from our observations of causal regularities, and thus we take our hypothesis to be a form of “mild ontology” in the language of Sloman, Lombrozo, and Malt [28]. We propose that mechanism domains constitute the fundamental representations that allow us to generate causal models and explanations quickly and effortlessly.


[1] Url: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0268219

