Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms [1]
Larissa Albantakis (Department of Psychiatry, University of Wisconsin, Madison, Wisconsin, United States of America); Leonardo Barbosa (Fralin Biomedical Research Institute at VTC, Virginia Tech, Roanoke)
Date: 2023-11
In this section, we apply the mathematical framework of IIT 4.0 to several example systems. The goal is to illustrate three critical implications of IIT’s postulates: that the connectivity among a system’s units determines which subsets can form complexes and how rich a Φ-structure they can specify; that the Φ-structure depends on the substrate’s current state, including whether units are inactive or inactivated; and that functionally equivalent systems need not be phenomenally equivalent.
The following examples will feature very simple networks constituted of binary units $U_i \in U$ with state space $\Omega_{U_i} = \{-1, 1\}$ for all $U_i$ and a logistic (sigmoidal) activation function

$$p(u_{i,t+1} = 1 \mid u_t) = \frac{1}{1 + e^{-k \sum_j w_{ji} u_{j,t}}}, \qquad (60)$$

where $k > 0$, and

$$p(u_{i,t+1} = -1 \mid u_t) = 1 - p(u_{i,t+1} = 1 \mid u_t). \qquad (61)$$

In Eq (60), the parameter k defines the slope of the logistic function and allows one to adjust the amount of noise or determinism in the activation function (higher values signify a steeper slope and thus more determinism). The units $U_i$ can thus be viewed as noisy linear threshold units with weighted connections $w_{ji}$ among them, where k determines the effective connection strength.
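A minimal numerical sketch of Eq (60) follows: it builds a state-by-node transition probability matrix for units of this kind. The specific weight matrix and the little-endian row ordering (PyPhi’s convention) are illustrative choices, not specified in the text above.

```python
import numpy as np

def logistic_tpm(weights, k=4.0):
    """State-by-node TPM for binary units with states {-1, +1} (cf. Eq (60)).

    weights[j, i] is the connection weight from unit j to unit i.
    Rows follow PyPhi's little-endian convention: unit 0 is the
    fastest-varying bit of the row index.
    """
    n = weights.shape[0]
    tpm = np.zeros((2 ** n, n))
    for idx in range(2 ** n):
        bits = np.array([(idx >> i) & 1 for i in range(n)])
        u = np.where(bits == 1, 1.0, -1.0)                   # map {0, 1} -> {-1, +1}
        tpm[idx] = 1.0 / (1.0 + np.exp(-k * (u @ weights)))  # p(unit i ON at t+1), Eq (60)
    return tpm

# Example: two mutually coupled units; larger k makes the units more deterministic.
print(logistic_tpm(np.array([[0.0, 1.0],
                             [1.0, 0.0]]), k=4.0))
```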
As in Figs 1 and 2, units denoted by uppercase letters are in state ‘1’ (ON, depicted in black), units denoted by lowercase letters are in state ‘−1’ (OFF, depicted in white). Cause–effect structures are illustrated as geometrical shapes projected into 3D space (Fig 6). Distinctions are depicted as mechanisms (black labels) tying a cause (red labels) and an effect (green labels) through a link (orange edges, thickness indicating φ d ). Relation faces of second- and third-degree relations are depicted as edges or triangular surfaces between the causes and effects of the related distinctions. While edges always bind pairs of distinctions (a second-degree relation), triangular surfaces may bind the causes and effects of two or three distinctions (second- or third-degree relation). Relations of higher degrees are not depicted.
Each panel shows the network’s causal model and weights on the left. Blue regions indicate complexes with their respective φ s values. In all networks, k = 4 and the state is Abcdef. The Φ-structure(s) specified by the network’s complexes are illustrated to the right (with only second- and third-degree relation faces depicted) with a list of their distinctions for smaller systems and their ∑φ values for those systems with many distinctions and relations. All integrated information values are in ibits. (A) A degenerate network in which unit A forms a bottleneck with redundant inputs from and outputs to the remaining units. The first-maximal complex is Ab, which excludes all other subsets with φ s > 0 except for the individual units c, d, e, and f. (B) The modular network condenses into three complexes along its fault lines (which exclude all subsets and supersets), each with a maximal φ s value, but low Φ, as the modules each specify only two or three distinctions and at most five relations. (C) A directed cycle of six units forms a six-unit complex with φ s = 1.74 ibits, as no other subset is integrated. However, the Φ-structure of the directed cycle is composed of only first-order distinctions and few relations. (D) A specialized lattice also forms a complex (which excludes all subsets), but specifies 27 first- and high-order distinctions, with many relations (>1.5 × 10^6) among them. Its Φ value is 11452 ibits. (E) A slightly modified version of the specialized lattice in which the first-maximal complex is Abef. The full system is not maximally irreducible and is excluded as a complex, despite its positive φ s value (indicated in gray).
All examples were computed using the “iit-4.0” feature branch of PyPhi [37]. This branch will be available in the next official release of the software. An example notebook (linked in the original article) recreates the analysis of Fig 1 (identifying complexes), Fig 2 (computing distinctions), and Fig 4 (computing relations).
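For readers who want to reproduce a small-scale version of this analysis, the following is a minimal sketch using PyPhi’s long-standing public API (pyphi.Network, pyphi.compute.major_complex, pyphi.compute.ces). The toy copy-cycle network is purely illustrative, and function names on the iit-4.0 branch may differ from those in released versions.

```python
import numpy as np
import pyphi

# Toy 3-unit copy cycle (A -> B -> C -> A), deterministic for simplicity.
# Rows follow PyPhi's little-endian state ordering; columns give p(unit ON next).
tpm = np.array([
    [0, 0, 0],  # abc -> abc
    [0, 1, 0],  # Abc -> aBc
    [0, 0, 1],  # aBc -> abC
    [0, 1, 1],  # ABc -> aBC
    [1, 0, 0],  # abC -> Abc
    [1, 1, 0],  # AbC -> ABc
    [1, 0, 1],  # aBC -> AbC
    [1, 1, 1],  # ABC -> ABC
])
cm = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])
network = pyphi.Network(tpm, cm=cm, node_labels=("A", "B", "C"))
state = (1, 0, 0)  # state Abc

# Identify the maximal complex for this state (cf. Fig 1) ...
sia = pyphi.compute.major_complex(network, state)
print(sia.subsystem, sia.phi)

# ... and unfold the distinctions it specifies (cf. Fig 2).
ces = pyphi.compute.ces(sia.subsystem)
print(len(ces), "distinctions")
```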
Taken together, the examples in Fig 6 demonstrate that the connectivity among the units of a system has a strong impact on what set of units can constitute a complex and thereby on the structure integrated information it can specify. The examples also demonstrate the role played by the various requirements that must be satisfied by a substrate of consciousness: existence (causal power), intrinsicality, specificity, maximal irreducibility (integration and exclusion), and composition (structure).
A similar situation may occur in the brain. The brain as a whole is undoubtedly integrated (not to mention that it is integrated with the body as a whole), and neural “traffic” is heavy throughout. However, its anatomical organization may be such that a subset of brain regions, arranged in a dense 3D lattice primarily located in posterior cortex, may achieve a much higher value of integrated information than any other subset. Those regions would then constitute the first complex (the “main complex,” [ 4 ]), and the remaining regions might condense into a large number of much smaller complexes.
Finally, Fig 6E shows a network of six units, four of which (Abef) constitute a specialized lattice that corresponds to the first complex. Though integrated, the full set of 6 units happens to be slightly less irreducible (φ s = 0.15) than one of its 4-unit subsets (φ s = 0.27). From the extrinsic perspective, the 6-unit system undoubtedly behaves as a highly integrated whole (nearly as much as its 4-unit subset), one that could produce complex input–output functions due to its rich internal structure. From the intrinsic perspective of the system, however, only the 4-unit subset satisfies all the postulates of existence, including maximal irreducibility (accounting for the definite nature of experience). In this example, the remaining units form a second complex with low φ s and serve as background conditions for the first complex.
In the brain, a large part of the cerebral cortex, especially its posterior regions, is organized as a dense, divergent-convergent hierarchical 3D lattice of specialized units, which makes it a plausible candidate for the substrate of human consciousness [ 4 , 11 , 51 , 52 ]. Note that directed cycles originating and ending in such lattices typically remain excluded from the first-maximal complex because minimal partitions across such cycles yield a much lower value of φ s compared to minimal partitions across large lattices.
Preliminary work indicates that lattices of specialized units, implementing different input–output functions, but partially overlapping in their inputs (receptive fields) and outputs (projective fields), are particularly well suited to constituting large substrates that unfold into extraordinarily rich Φ-structures. The number of distinctions specified by an optimally connected, specialized system is bounded above by 2^n − 1, and the number of relations among as many distinctions is bounded by 2^(2^n − 1) − 1, the number of non-empty subsets of those distinctions. The structure integrated information of such Φ-structures is correspondingly large [50].
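For a rough sense of how quickly these bounds grow, the short sketch below evaluates them numerically, assuming (as above) that every non-empty subset of units can specify a distinction and every non-empty subset of distinctions a relation.

```python
def ces_bounds(n):
    """Upper bounds on distinctions and relations for n binary units."""
    max_distinctions = 2 ** n - 1              # non-empty subsets of units
    max_relations = 2 ** max_distinctions - 1  # non-empty subsets of distinctions
    return max_distinctions, max_relations

for n in (3, 4, 6):
    d, r = ces_bounds(n)
    print(f"n = {n}: at most {d} distinctions and {r:.2e} relations")
```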
Fig 6D shows a network consisting of six heterogeneously connected units—a “specialized” lattice, again with k = 4. While many subsystems within the specialized network have positive values of system integrated information φ s , the full 6-unit system is the maximal substrate (excluding all its subsets from being maximal substrates). Out of 63 possible distinctions, the Φ-structure comprises 27 distinctions with causes and effects congruent with the system’s maximal cause–effect state. Consequently, the full 6-unit system also specifies a much larger number of causal relations compared to the copy cycle system.
The brain is rich in partially segregated, directed cycles, such as those originating in cortical areas, sequentially reaching stations in the basal ganglia and thalamus, and cycling back to cortex [ 48 , 49 ]. These cycles are critical for carrying out many cognitive and other functions, but they do not appear to contribute directly to experience [ 4 ].
Highly deterministic directed cycles can easily be extended to constitute large complexes, being more irreducible than any of their subsets. However, the lack of cross-connections (“chords” in graph-theoretic terms) greatly limits the number of components of the Φ-structures specified by the complexes, and thus their structure integrated information (Φ). (Note also that increasing the number of units that constitute the directed cycle would not change the amount of φ s specified by the network as a whole.)
Fig 6C shows a directed cycle in which six units are unidirectionally connected with weight w = 1.0 and k = 4. Each unit copies the state of the unit before it, and its state is copied by the unit after it, with some indeterminism. The copy cycle constitutes a 6-unit complex with a maximal φ s = 1.74 ibits. However, despite the “large” substrate, the Φ-structure it specifies has low structure integrated information (Φ = 7.65). This is because the system’s Φ-structure is composed exclusively of first-order distinctions, and consequently of a small number of relations.
Note that fault lines can be due not just to neuroanatomy but also to neurophysiological factors. For example, during early slow-wave sleep, the dense interconnections among neuronal groups in cerebral cortical areas may break down, becoming causally ineffective due to the bistability of neuronal excitability. This bistability, brought about by neuromodulatory changes [ 46 ], is associated with the loss of consciousness [ 47 ].
Fig 6B shows a network comprising three weakly interconnected modules, each having two strongly connected units (k = 4). In this case, the weak inter-module connections are clear fault lines. Properly normalized, partitions along these fault lines separating modules yield values of φ s that are much smaller than those yielded by partitions that cut across modules. As a consequence, the 6-unit system condenses into three complexes (Ab, cd, and ef), as determined by their maximal φ s values. Again, because the modules are small, their Φ values are low. Intriguingly, a brain region such as the cerebellum, whose anatomical organization is highly modular, does not contribute to consciousness [ 44 , 45 ], even though it contains several times more neurons than the cerebral cortex (and is indirectly connected to it).
Notably, the organization of the cerebral cortex, widely considered as the likely substrate of human consciousness, is characterized by extraordinary specialization of neural units at all levels [ 38 – 40 ]. Moreover, if the background conditions are well controlled, neurons are thought to interact in a highly reliable, nearly deterministic manner [ 41 – 43 ].
The causes and effects of the causal distinctions for the two types of complexes are shown in the middle, and the corresponding cause–effect structures are illustrated on the right. In this case, degeneracy (coupled with indeterminism) undermines the ability of the maximal substrate to grow in size, which in turn limits the richness of the Φ-structure that can be supported. Because of the bottleneck architecture, the current state of candidate system Abcdef has many possible causes and effects, leading to an exponential decrease in selectivity (the conditional probabilities of cause and effect states). This dilutes the value of intrinsic information (ii) for larger subsets of units, which in turn reduces their value of system integrated information φ s . Consequently, the maximal substrates are small, and their Φ values are necessarily low.
Fig 6A shows a network with medium indeterminism (k = 4) and high degeneracy, due to the fact that unit A forms a “bottleneck” with inputs and outputs to and from the remaining units. The network condenses into one complex of two units Ab and four complexes corresponding to the individual units c, d, e, and f (also called “monads”).
The first set of examples highlights how the organization of connections among units impacts the ability of a substrate to support a cause–effect structure with high structure integrated information (high Φ). Fig 6 shows five systems, all in the same state s = Abcdef with the same number of units, but with different connectivity among the units.
In Fig 7C, we see what happens if unit E, instead of just turning inactive (OFF), is inactivated (abolishing its cause–effect power because it no longer has any counterfactual states and thus cannot be intervened upon). In this case, all the distinctions and relations to which that unit contributes as a mechanism would cease to exist (its compound Φ-fold collapses). Moreover, all the distinctions and relations to whose purviews that unit contributes—its purview Φ-fold—would also collapse or change. In fact, the complex shrinks because it cannot include that unit. With respect to the neural substrate of consciousness, this means that while an inactive unit contributes to a different experience, an inactivated unit ceases to contribute to experience altogether. The fundamental difference between inactive and inactivated units leads to the following corollary of IIT: unlike a fully inactivated substrate, which, as would be expected, cannot support any experience, an inactive substrate can. If a maximal substrate in an inactive state is in working order and specifies a large Φ-structure, it will support a highly structured experience, such as the experience of empty space [11] or the feeling of “pure presence” (see (14) in S1 Notes).
If we change the state of unit E from ON to OFF (in neural terms, the unit becomes inactive), the distinctions that the unit contributes to when ON, as well as the associated relations, may change (Fig 7B). In the case illustrated by the figure, the purviews and irreducibility of several distinctions and associated relations change; the number of distinctions stays the same and φ s changes only slightly, but the number of relations is lower, leading to a lower Φ value. In other words, what a single unit contributes to intrinsic existence is not some small “bit” of information. Instead, a unit contributes an entire sub-structure, composed of a very large number of distinctions and relations. The set of distinctions to which a subset of units contributes as a mechanism, either alone or in combination with other units, together with their associated relations, forms a compound Φ-fold. With respect to the neural substrate of consciousness in the brain, this means that even a change in the state of a single unit is typically associated with a change in an entire Φ-fold within the overall Φ-structure, with a corresponding change in the structure of the experience. (Note, however, that in larger systems such changes will typically be less extreme; see also [11].)
In all panels, the same causal model and weights are shown on the left, but in different states. For all networks, k = 4. The set of distinctions D(s), their causes and effects, and their φ d values are shown in the middle. The Φ-structure specified by the network’s complex is illustrated on the right (again with only second- and third-degree relation faces depicted). All integrated information values are in ibits. (A) The system in state ABcdE is a complex with 23 out of 31 distinctions and Φ = 22.26. (B) The same system in state ABcde, where unit E is inactive (“OFF”), also forms a complex with the same number of distinctions, but a somewhat lower Φ value due to a lower number of relations between distinctions. In addition, the system’s Φ-structure differs from that in (A), as the system now specifies a different set of compositional causes and effects. (C) If, instead of being inactive, unit E is inactivated (fixed into the “OFF” state), the inactivated unit cannot contribute to the complex or Φ-structure anymore. The complex is now constituted of four units (ABcd), with only 14 distinctions and markedly reduced structure integrated information (Φ = 3.35).
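In PyPhi terms, an inactive unit is simply one whose current state is OFF but that remains part of the candidate system, whereas an inactivated unit can be approximated by excluding it from the candidate system, in which case PyPhi holds it fixed in its current state as a background condition. Below is a minimal sketch reusing the toy network from the PyPhi example above; this is an illustrative approximation, not necessarily the exact procedure used for Fig 7.

```python
# Unit C is inactive: it is OFF but still part of the candidate system.
inactive = pyphi.Subsystem(network, (1, 1, 0), (0, 1, 2))

# Unit C is "inactivated" (approximation): excluded from the candidate system,
# so it is held fixed as a background condition with no counterfactual states.
inactivated = pyphi.Subsystem(network, (1, 1, 0), (0, 1))

print(pyphi.compute.phi(inactive), pyphi.compute.phi(inactivated))
```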
A substrate exerts cause–effect power in its current state. For the same substrate, changing the state of even one unit may have major consequences on the distinctions and relations that compose its Φ-structure: many may be lost, or gained, and many may change their value of irreducibility (φ d and φ r ).
The examples also show that the overall system dynamics, while often revealing relevant aspects of a system’s architecture, typically do not and cannot exhaust the richness of its current cause–effect structure. For example, a system in a fixed point is dynamically “dead” (and “does” nothing), but it may be phenomenally quite “alive,” for example, experiencing “pure presence” (see (14) in S1 Notes ). Of course, the system’s causal powers can be fully unfolded, and revealed dynamically, by extensive manipulations and observations of subsets of system units because they are implicitly captured by the system’s causal model and ultimately by its transition probability matrix [ 29 ].
This dissociation between phenomenal and functional equivalence has important implications. As we have seen, a purely feed-forward system necessarily has φ s = 0. Therefore, it cannot support a cause–effect structure and cannot be conscious, whereas systems with a recurrent architecture can. On the other hand, the behavior (input–output function) of any (discrete) recurrent system can also be implemented by a system with a feed-forward architecture [ 54 ]. This implies that any behavior performed by a conscious system supported by a recurrent architecture can also be performed by an unconscious system, no matter how complex the behavior is. More generally, digital computers implementing programs capable of artificial general intelligence may in principle be able to emulate any function performed by conscious humans and yet, because of the way they are physically organized, they would do so without experiencing anything, or at least anything resembling, in quantity and quality, what each of us experiences [ 20 ] (see also (15) in S1 Notes ).
These examples illustrate a simple scenario of functional equivalence of three systems characterized by a different architecture. The equivalence is with respect to a simple input–output function, in this case coin counting, which they multiply realize. The systems are also equivalent in terms of their global system dynamics, in the sense that they go through a globally equivalent sequence of internal states. However, because of their different substrates, the three systems specify different cause–effect structures. Therefore, based on the postulates of IIT, they are not phenomenally equivalent. In other words, they are equivalent in what they do extrinsically, but not in what they are intrinsically.
For consistency in the causal powers analysis, in all three cases, the global state “0” that activates the output unit if I = 1 is selected such that it corresponds to the binary state “all OFF” (000), which is followed by 1 ≔ 100 and 2 ≔ 010. Also, the Φ-structure of each system is unfolded in state 1 ≔ 100 in all three cases.
In addition to being functionally equivalent in their outward behavior, the three systems share the same internal global dynamics, as their internal states update according to the same global state-transition diagram ( Fig 8B ). Given an input I = 1, the system updates its state, cycling through all its 8 global states (labeled 0–7) over 8 updates. For an input of I = 0, the system remains in its present state. Moreover, all three systems are constituted of three binary units whose joint states map one-to-one onto the systems’ global state labels (0–7). However, the mapping is different for different systems ( Fig 8C , left). This is because the internal binary update sequence depends on the interactions among the internal units [ 29 , 53 ], which differ in the three cases, as can easily be determined through manipulations and observations.
(A) The input–output function realized by three different systems (shown in (C)): a count of eight instances of input I = 1 leads to output O = 1. (B) The global state-transition diagram is also the same for the three systems: if I = 0, the systems will remain in their current global state, labeled as 0–7; if I = 1, the systems will move one state forward, cycling through their global states, and activate the output if S = 0. (C) Three systems constituted of three binary units but differing in how the units are connected and interact. As a consequence, the one-to-one mapping between the 3-bit binary states and the global state labels differs. However, all three systems initially transition from 000 to 100 to 010. Analyzed in state 100, the first system (top) turns out to be a single complex that specifies a Φ-structure with six distinctions and many relations, yielding a high value of Φ. The second system (middle) is also a complex, with the same φ s value, but it specifies a Φ-structure with fewer distinctions and relations, yielding a lower value of Φ. Finally, the third system (bottom) is reducible (φ s = 0) and splits into three smaller complexes (entities) with minimal Φ-structures and low Φ.
Fig 8 shows three simple deterministic systems with binary units (here the “OFF” state is 0, and “ON” is 1) that perform the same input–output function, treating the internal dynamics of the system as a black box. The function could be thought of, for example, as an electronic tollbooth “counting 8 valid coins” (8 times input I = 1) before opening the gate [ 53 ]. Each system receives one binary input (I) and has one binary output (O). The output unit switches “ON” on a count of eight positive inputs I = 1 (when the global state with label ‘0’ is reached in the cycle), upon which the system resets ( Fig 8A ).
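Treated purely as a black box, the shared input–output behavior of Fig 8A and 8B can be summarized in a few lines. The sketch below describes the global state-transition diagram only, not any of the three internal implementations shown in Fig 8C.

```python
def tollbooth_step(state, i):
    """One update of the global state-transition diagram (Fig 8B).

    state: global state label 0-7; i: binary input (1 = valid coin).
    The gate opens (output = 1) when the count wraps back to global state 0.
    """
    if i == 0:
        return state, 0               # no coin: remain in the current state
    next_state = (state + 1) % 8      # valid coin: advance one state in the cycle
    return next_state, 1 if next_state == 0 else 0

# Eight valid coins starting from state 0 open the gate on the eighth coin.
state = 0
for _ in range(8):
    state, output = tollbooth_step(state, 1)
print(state, output)  # -> 0 1
```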
By the intrinsicality postulate, the Φ-structure of a complex depends on the causal interactions between system subsets, not on the system’s interaction with its environment (except for the role of the environment in triggering specific system states). In general, different physical systems with different internal causal structure may perform the same input–output functions.
Conclusions
IIT attempts to account for the presence and quality of consciousness in physical terms. It starts from the existence of experience, and proceeds by characterizing its essential properties—those that are immediate and irrefutably true of every conceivable experience (axioms). These are then formulated as essential properties of physical existence (postulates), the necessary and sufficient conditions that a substrate must satisfy to support an experience—to constitute a complex. Note that “substrate” is meant in purely operational terms—as a set of units that a conscious observer can observe and manipulate. Likewise, “physical” is understood in purely operational terms as cause–effect power—the power to take and make a difference.
The postulates can be assessed based purely on a substrate’s transition probability matrix, as was illustrated by a few idealized causal models. Thus, a substrate of consciousness must be able to take and make a difference upon itself (existence and intrinsicality), it must be able to specify a cause and an effect state that are highly informative and selective (information), and it must do so in a way that is both irreducible (integration) and definite (exclusion). Finally, it must specify its cause and effect in a structured manner (composition), where the causal powers of its subsets over its subsets compose a cause–effect structure of distinctions and relations—a Φ-structure. Thus, a complex does not exist as such but only “unfolded” as a Φ-structure—an intrinsic entity that exists for itself, absolutely, rather than relative to an external observer.
As shown above, these requirements constrain which substrates can and cannot support consciousness. Substrates that lack specificity, due to indeterminism and/or degeneracy, cannot grow to be large complexes. Substrates that are weakly integrated, due to architectural or functional fault lines in their interactions, are less integrated than some of their subsets. Because they are not maximally irreducible, they do not qualify as complexes. This is the case even though they may “hang together” well enough from an extrinsic perspective (having a respectable value of φ s ). Furthermore, even substrates that are maximally integrated may support Φ-structures that are extremely sparse, as in the case of directed cycles. Based on the postulates of IIT, a universal substrate ultimately “condenses” into a set of disjoint (non-overlapping) complexes, each constituted of a set of macro or micro units.
The physical account of consciousness provided by IIT should be understood as an explanatory identity: every property of an experience should ultimately be accounted for by a property of the cause–effect structure specified by a substrate that satisfies its postulates, with no additional ingredients. The identity is not between two different substances or realms—the phenomenal and the physical—but between intrinsic (subjective) existence and extrinsic (objective) existence. Intrinsic existence is immediate and irrefutable, while extrinsic existence is defined operationally as cause–effect power discovered through observation and manipulation. The primacy of intrinsic existence (of experience) in IIT contrasts with standard attempts at accounting for consciousness as something “generated by” or “emerging from” a substrate constituted of matter and energy and following physical laws.
The physical correspondent of an experience is not the substrate as such but the Φ-structure specified by the substrate in its current state. Therefore, minor changes in the substrate state can correspond to major changes in the specified Φ-structure. For example, if the state of a single unit changes, an entire Φ-fold within the Φ-structure will change, and if a single inactive unit is inactivated, its associated Φ-fold will collapse, even though the current state of the substrate appears the same (Fig 7).
Each experience corresponds to a Φ-structure, not a set of functions, processes, or computations. Said otherwise, consciousness is about being, not doing [1, 29, 55]. This means that systems with different architectures may be functionally equivalent—both in terms of global input–output functions and global intrinsic dynamics—but they will not be phenomenally equivalent. For example, a feed-forward system can be functionally equivalent to a recurrent system that constitutes a complex, but feed-forward systems cannot constitute complexes because they do not satisfy maximal irreducibility. Accordingly, artificial systems powered by super-intelligent computer programs, but implemented by feed-forward hardware or encompassing critical bottlenecks, would experience nothing (or nearly nothing) because they have the wrong kind of physical architecture, even though they may be behaviorally indistinguishable from human beings [20].
Even though the entire framework of IIT is based on just a few axioms and postulates, it is not possible in practice to exhaustively apply the postulates to unfold the cause–effect power of realistic systems [32, 56]. It is not feasible to perform all possible observations and manipulations to fully characterize a universal TPM, or to perform all calculations on the TPM that would be necessary to condense it exhaustively into complexes and unfold their cause–effect power in full. The number of possible systems, of system partitions, of candidate distinctions—each with their partitions and relations—is the result of multiple, nested combinatorial explosions. Moreover, these observations, manipulations, and calculations would need to be repeated at many different grains, with many rounds of maximizations. For these reasons, a full analysis of complexes and their cause–effect structure can only be performed on idealized systems of a few units [37].
On the other hand, we can simplify the computation considerably by using various assumptions and approximations, as with the “cut one” approximation described in [37]. Also, while the number of relations vastly exceeds the number of units and of distinctions (its upper bound for a system of n units is 2^(2^n − 1) − 1), it can be determined analytically, and so can ∑φ r for a given set of distinctions (S3 Text). Developing tight approximations, as well as bounded estimates of a system’s integrated information (φ s and Φ), is one of the main areas of ongoing research related to IIT [50].
Despite the infeasibility of an exhaustive calculation of the relevant quantities and structures for a realistic system, IIT already provides considerable explanatory and predictive power in many real-world situations, making it eminently testable [4, 57, 58]. A fundamental prediction is that Φ should be high in conscious states, such as wakefulness and dreaming, and low in unconscious states, such as dreamless sleep and anesthesia. This prediction has already found substantial support in human studies that have applied measures of complexity inspired by IIT to successfully classify subjects as conscious vs. unconscious [4, 22, 23, 59]. IIT can also account mechanistically for the loss of consciousness in deep sleep and anesthesia [4, 47]. Furthermore, it can provide a principled account of why certain portions of the brain may constitute an ideal substrate of consciousness and others may not, why the borders of the main complex in the brain should be where they are, and why the units of the complex should have a particular grain (the one that yields a maximum of φ s ). A stringent prediction is that the location of the main complex, as determined by the overall maximum of φ s within the brain, should correspond to its location as determined through clinical and experimental evidence. Another prediction that follows from first principles is that constituents of the main complex can support conscious contents even if they are mostly inactive, but not if they are inactivated [4, 11]. Yet another prediction is that the complete inactivation of constituents of the main complex should lead to absolute agnosia (unawareness that anything is missing).
IIT further predicts that the quality of experience should be accounted for by the way the Φ-structure is composed, which in turn depends on the architecture of the substrate specifying it. This was demonstrated in a recent paper showing how the fundamental properties of spatial experiences—those that make space feel “extended”—can be accounted for by those of Φ-structures specified by 2D grids of units, such as those found in much of posterior cortex [11]. This prediction is in line with neurological evidence of their role in supporting the experience of space [11]. Ongoing work aims at accounting for the quality of experienced time and that of experienced objects (see (16) in S1 Notes). A related prediction is that changes in the strength of connections within the neural substrate of consciousness should be associated with changes in experience, even if neural activity does not change [60]. Also, similarities and dissimilarities in the structure of experience should be accounted for by similarities and dissimilarities among Φ-structures and Φ-folds specified by the neural substrate of consciousness.
While the listed predictions may appear largely qualitative in nature, many of them rest on specific features of the accompanying quantitative analysis. This is the case for predictions regarding the borders (and grain) of the main complex in the brain, which depend on the relative φ s values of potential substrates of interest, and even more so for predictions regarding the quality and richness of certain experiences and the predicted features of their underlying substrates. IIT’s postulates, and the mathematical framework proposed to evaluate them, rest on “inferences to a good explanation” (Box 1). While we have aimed for maximal consistency, specificity, and simplicity at every junction in formulating IIT’s mathematical implementation, some of the algorithmic choices remain open to further evaluation. These include, for example, the proper treatment of background conditions and the resolution of ties given symmetries in the TPMs of specific systems (see S1 Text). More generally, further validation of IIT will depend on a systematic back-and-forth between phenomenology, theoretical inferences, and neuroscientific evidence [1].
In addition to empirical work aimed at validating the theory, much remains to be done at the theoretical level. According to IIT, the meaning of an experience is its feeling—whether those of spatial extendedness, of temporal flow, or of objects, to name but a few (“the meaning is the feeling”). This means that every meaning is identical to a sub-structure within a current Φ-structure—a content of experience—whether it is triggered by extrinsic inputs or it occurs spontaneously during a dream. Therefore, all meaning is ultimately intrinsic. Ongoing work aims at providing a self-consistent explanation of how intrinsic meanings can capture relevant features of causal processes in the environment (see (17) in S1 Notes). It will also be important to explain how intersubjectively validated knowledge can be obtained despite the intrinsic and partially idiosyncratic nature of meaning.
To the extent that the theory is validated through empirical evidence obtained from the human brain, IIT can then offer a plausible inferential basis for addressing several questions that depend on an explicit theory of consciousness. As indicated in the section on phenomenal and functional equivalence, and argued in ongoing work [20], one consequence of IIT is that typical computer architectures are not suitable for supporting consciousness, no matter how closely their behavior may resemble ours. By the same token, it can be inferred from IIT that animal species that may look and behave quite differently from us may be highly conscious, as long as their brains have a compatible architecture. Other inferences concern our own experience and whether it plays a causal role, or is simply “along for the ride” while our brain performs its functions. As recently argued, IIT implies that we have true free will—that we have true alternatives, make true decisions, and truly cause. Because only what truly exists (intrinsically, for itself) can truly cause, we, rather than our neurons, cause our willed actions and are responsible for their consequences [18].
Finally, an ontology that is grounded in experience as intrinsic existence—an intrinsic ontology—must not only provide an account of subjective existence in objective, operational terms, but also offer a path toward a unified view of nature—of all that exists and happens. One step in this direction is the application of the same postulates that define causal powers (existence) to the evaluation of actual causes and effects (“what caused what” [10]). Another is to unify classical accounts of information (as communication and storage of signals) with IIT’s notion of information as derived from the properties of experience—that is, information as causal, intrinsic, specific, maximally irreducible, and structured (meaningful) [8] (see also (18) in S1 Notes). Yet another is the study of the evolution of a substrate’s causal powers as conditional probabilities that update themselves [61].
Even so, there are many ways in which IIT may turn out to be inadequate or wrong. Are some of its assumptions, including those of a discrete, finite set of “atomic” units of cause–effect power, incompatible with current physics [32, 62] (but see [63–66])? Are its axiomatic basis and the formulation of axioms as postulates sound and unique? And, most critically, can IIT survive the results of empirical investigations assessing the relationship between the quantity and quality of consciousness and its substrate in the brain?
---
[1] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011465
Published by PLOS under a Creative Commons Attribution (CC BY 4.0) license.