This author’s view is that machines, in their current or near-future forms, cannot have genuine intentional states. The reason is that building a machine with genuine intentional states, or with consciousness, would require us to understand the causal requirements for consciousness, and our only reference case is our own biological basis for it. We know biology is causally related to consciousness because changes to biology cause changes in consciousness. Understanding the biological foundations of consciousness is no easy task, though, especially since we have no clear, agreed-upon philosophical definition of consciousness. This gap in biological knowledge may well need to be bridged before consciousness can be duplicated by a computer. However, it seems more than reasonable to think that machines will be able to simulate conscious and intentional behavior long before we have a complete biological or philosophical description of consciousness.
Before any further support can be put forth, certain terminology needs brief explanation. Artificial intelligence (AI) refers to computer programs built to solve problems creatively; typically, AI connotes the replication of human cognitive powers in a computer. Strong AI, according to John Searle, is the claim that "the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (Dennett, 1991, p. 435). Strong AI can be considered a duplication of human consciousness. Weak AI, on the other hand, refers to an AI that merely simulates a mind and only seems to be capable of understanding. The test developed by and named after Alan Turing is traditionally seen as the way to determine whether a machine possesses intelligence: a team of judges interacts with both an AI and humans through textual communication, and if the judges cannot reliably distinguish the AI from the humans, the AI is considered intelligent. Good Old-Fashioned AI (GOFAI) is an older approach based on explicitly programming knowledge and behaviors; it runs on the assumption that complexity of knowledge is necessary for the development of intelligence. Artificial neural networks (ANNs) are a more recent approach, loosely modeled on biological neural networks and designed around statistical analysis of data; ANNs are typically employed in connectionist models, which run on the assumption that complexity of data processing is necessary for the development of intelligence. Syntax refers to the procedural rules of a program or the grammatical rules for forming sentences in a language; semantics refers to the meaning of words and sentences.
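To make the two approaches concrete, here is a minimal sketch of my own (not drawn from any of the works cited): a GOFAI-style system answers by consulting hand-written rules, while a connectionist-style system fits the weights of a single artificial neuron to example data. The rule table, function names, and toy task (learning logical AND) are all hypothetical.

```python
import math

# --- GOFAI-style: knowledge is programmed explicitly as symbol-manipulation rules.
RULES = {"hello": "greeting", "goodbye": "farewell"}   # hypothetical hand-written knowledge base

def gofai_classify(word):
    """Look the answer up in hand-written rules; nothing is learned."""
    return RULES.get(word, "unknown")

# --- Connectionist-style: a single artificial neuron learns its weights from data.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=2000, lr=0.5):
    """Fit weights and bias by gradient descent on (features, label) pairs."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for features, label in samples:
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
            err = pred - label
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
            b -= lr * err
    return w, b

# Toy data: learn the logical AND of two inputs purely from examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
print(gofai_classify("hello"))            # rule-based answer
print(round(sigmoid(w[0] + w[1] + b)))    # learned answer for input (1, 1)
```

The contrast is deliberately exaggerated: real GOFAI systems chain thousands of rules and real ANNs have many layers, but the division of labor, programmed knowledge versus learned statistics, is the one described above.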
## The Chinese Room
John Searle designed an early thought experiment to demonstrate his conviction that computer programs could not be conscious. The paper in which he introduced the argument began with two propositions: that genuine intentionality in humans and animals is due to causal properties of the brain, and that “instantiating a computer program is never by itself a sufficient condition of intentionality” (Searle 1980). The thought experiment is meant to support the second proposition and can be summarized as follows: consider an enclosed room with a man inside and an opening through which papers can be exchanged with the outside. The man does not understand the Chinese language to any degree. Inside the room, however, is a book that allows him to cross-reference input-output relations among Chinese symbols. The book is designed so that a person outside the room who is fluent in Chinese would receive precisely the responses he or she might expect from another fluent speaker of Chinese. The room thus implements a computer program, and it passes the Turing test. Searle argues that the thought experiment reveals that no computer program could actually have genuine conscious understanding. Searle (1990) proposed the following formal argument to support the conclusions he drew from the "Chinese Room" thought experiment:
(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.
(C1) Programs are neither constitutive of nor sufficient for minds.
(A4) Brains cause minds.
(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
This argument is a good starting place for discussions of machine intelligence and machine consciousness. The truth of the first axiom follows from the definition of a computer program (Searle 1997), and the truth of the second axiom is drawn from everyday human experience: our thoughts require meaning in addition to syntactical structure. This allows us to restrict our discussion to the more controversial third axiom, that "syntax by itself is neither constitutive of nor sufficient for semantics." This axiom is derived directly from Searle's thought experiment. He claims that behavior is neither part of nor sufficient for understanding, because a scenario can be constructed in which a functional agent possesses no subjective understanding of meaning and yet performs behaviors that a third party perceives as understanding. These behaviors, however, are merely formulaic responses based on input-output associations, and this is what Searle denies can be constitutive of a mind. The reason a third party might take the agent in the room to understand meaning can be explained rather simply: the very construction of the input-output associations requires subjective understanding in the first place, and this "second order" understanding built into the room creates an illusion of first-order understanding in the agent. In all likelihood, any perceived subjective understanding is due to the mind of the individual who created the associations rather than to a mind of the agent itself.
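The input-output associations Searle has in mind can be caricatured in a few lines of code. The sketch below is my own illustration, not Searle’s, and the symbol pairs are invented for the example; the point is that the program relates strings of symbols to other strings purely by their form and represents nothing about what they mean.

```python
# A toy caricature of the Chinese Room rulebook (my own illustration, not Searle's).
# Incoming symbol strings are paired with outgoing symbol strings purely by form;
# nothing anywhere in the program encodes what the symbols mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",      # "How is the weather today?" -> "The weather is fine today."
}

def room(book, incoming_slip):
    """Hand back whatever string the book pairs with the incoming string."""
    return book.get(incoming_slip, "请再说一遍。")  # fallback: "Please say that again."

print(room(RULEBOOK, "你好吗？"))   # prints a fluent-looking reply with zero understanding
```

A fluent observer outside the room sees sensible conversation; inside, there is only pattern matching.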
There have been several interesting replies to Searle’s thought experiment, and Searle himself preemptively responded to what he considered the standard ones. The systems reply argues that the entire room, as a system, understands Chinese, and that the man inside the room and the book are merely parts of that system. The reply draws an analogy with the way our brains work: the brain is a system of component parts, none of which understands meaning in its own right, yet working together those parts produce understanding. Searle responds to the systems reply by having the man internalize the entire system. He asks us to imagine a man who has memorized all of the rules in the book and who roams the world freely, communicating in Chinese but never understanding Chinese. Searle argues that the book has become a virtual program running on the hardware of the man’s brain, and that this does not change the fundamental point of his original argument: no purely syntactically defined program can produce mental states.
Another reply, the brain simulator reply, asks us to consider a program that simulates, in perfect detail, the operation of the brain of a person fluent in Chinese; it posits that simulating the brain is sufficient for a computer to possess mental states. Searle argues against this reply in two ways. First, Strong AI operates on the premise that minds can be realized independently of brains, so arguing for Strong AI by appeal to simulating the brain seems contradictory. Second, a brain simulation would reproduce only the formal structure of the brain, not its specific causal properties; consequently, the simulator would not possess genuine mental states. Searle applies the same logic to deny consciousness to connectionist networks that simulate the parallel, distributed structure of the brain, noting that “any function that can be computed on a parallel machine can also be computed on a serial machine” (Searle 1990).
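Searle’s quoted point about parallel and serial machines can be illustrated with a small sketch of my own (the weights and inputs are made up, and the list comprehension only stands in conceptually for simultaneous firing): a layer of units yields exactly the same outputs whether we imagine the units all firing at once or evaluate them strictly one at a time.

```python
# Toy illustration (mine, not Searle's): a "parallel" layer of units and a serial loop
# compute exactly the same function, echoing the claim that a serial machine can do
# whatever a parallel one can.
import math

WEIGHTS = [[0.5, -1.0], [2.0, 0.25], [-0.75, 1.5]]   # hypothetical 3-unit layer, 2 inputs each
BIASES = [0.1, -0.2, 0.0]

def unit(weights, bias, inputs):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def layer_parallel(inputs):
    """Conceptually, every unit fires at once (here a comprehension stands in for that)."""
    return [unit(w, b, inputs) for w, b in zip(WEIGHTS, BIASES)]

def layer_serial(inputs):
    """The same units evaluated strictly one after another."""
    outputs = []
    for w, b in zip(WEIGHTS, BIASES):
        outputs.append(unit(w, b, inputs))
    return outputs

x = [1.0, -0.5]
assert layer_parallel(x) == layer_serial(x)   # identical outputs, unit for unit
```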
What may be the best argument against Searle’s Chinese Room, however, comes from Paul and Patricia Churchland in what has come to be known as the speed reply. In their 1990 Scientific American article, a companion to Searle’s article in the same issue, they offer a parallel thought experiment, the “luminous room,” intended to reduce Searle’s argument to absurdity. They ask us to imagine a dark room in which a man waves a magnet around in an attempt to produce light. It seems to follow from intuition that no man could wave a magnet and produce light. And so the parody runs: 1) electricity and magnetism are forces, 2) the essential property of light is luminance, and 3) forces by themselves are neither constitutive of nor sufficient for luminance; therefore electricity and magnetism are neither constitutive of nor sufficient for light (Churchland and Churchland 1990). Nevertheless, we know that light simply is electromagnetic radiation. The reason a man waving a magnet cannot produce light in a dark room is that the electromagnetic waves would have to be produced at frequencies on the order of hundreds of trillions of oscillations per second. Common sense correctly tells us that a man waving a magnet cannot produce light, yet circumstances could be arranged in which a magnet driven by a machine at sufficiently high frequency would produce light; so our common-sense intuition can be mistaken, and any conclusion drawn from such intuition rests on a shaky basis. The Churchlands argue, and on this point I agree, that Searle’s argument against AI rests entirely upon intuition and ignorance, in the form of his third axiom.
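The “hundreds of trillions” figure is easy to verify with the standard wave relation, taking visible light at a wavelength of roughly 500 nm (a back-of-the-envelope check of my own, not a figure from the Churchlands’ article):

```latex
f = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \mathrm{m/s}}{500 \times 10^{-9}\ \mathrm{m}} = 6 \times 10^{14}\ \mathrm{Hz}
```

That is about six hundred trillion oscillations per second, which is why a hand-waved magnet, moving at a few cycles per second, cannot possibly produce visible light.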
## Can machines be conscious?
Ultimately, Searle argues that no machine built on pre-established, formulaic responses could be conscious: “if I do not understand Chinese solely on the basis of implementing a computer program for understanding Chinese, then neither does any other digital computer solely on that basis, because no digital computer has anything I do not have” (Searle, 1997, p. 11). Carried further, his argument says that any machine, no matter how perfectly it simulates conscious human behavior, would not be conscious unless it duplicated the causal powers of the human brain. The Churchlands agree with Searle that GOFAI is very likely destined to fail to reach its goal, although they think that connectionist approaches to AI will succeed. This line of argument certainly assumes a materialist viewpoint, specifically one in which the mind is a product of, or identical to, the human brain. Presumably, strict dualism would deny the possibility of Strong AI on the principle that we could not create a non-physical mind from physical stuff. Epiphenomenalism, by contrast, rests on the assumption that the mind is an emergent phenomenon of the brain. If the mind emerges from the brain, then all that is required for a machine to produce a real mind is for it to reproduce the relevant causal workings of the brain; this is essentially Searle’s own position.
Another common argument against Strong AI comes from the belief that souls are required for conscious minds and that machines do not have souls; therefore, no machine can have a conscious mind (Blackmore, 2004). This argument, however, rests on the unfounded philosophical claim that a soul is a requirement for a conscious mind; we do not know that any such relationship between a soul and consciousness exists. What we do know about consciousness is limited to human experience, especially its extensive relationship with human biology.
Let us assume that AI research advances to the point that we can produce an AI capable of passing the Turing test. How would we determine whether this AI was conscious? There is a set of conditions for consciousness that we could adapt for this purpose: brain structure, nonverbal behavioral evidence, the ability to use language, the ability to communicate, the ability to learn, the ability to solve problems, and creativity (Gennaro, 1996, p. 50). We can treat consciousness as occurring on a spectrum running from a rock to human consciousness or beyond. Since our hypothetical AI has already passed the Turing test, we know that it can communicate and use language. For argument’s sake, let us also assume that it possesses creativity, can solve problems, and can learn from experience.
At this point, we might be tempted to claim that the AI sits somewhere very near humans on this spectrum of consciousness. However, that leaves one condition unaddressed: brain structure. According to Searle, our AI would appear as conscious as any given human but would in reality be no more conscious than a rock. According to the Churchlands, if the AI had been modeled on human brain structure in such a way that 1) it possessed parallel processing, 2) it was fault tolerant and functionally persistent, 3) it stored large amounts of information in a distributed manner, and 4) it did not strictly “manipulate symbols according to structure-sensitive rules,” then it might actually be conscious (Churchland and Churchland, 1990, pp. 35-36). In other words, for the Churchlands (and for Searle), the degree to which an AI copies the brain determines the possibility of consciousness. Fundamentally, then, the question of whether a machine could ever be conscious is a question of the differences between biological and non-biological entities.
If consciousness is a property unique to biological organisms of a certain level of complexity, then Strong AI can never be possible (Blackmore, 2004, p. 201). However, if consciousness is a property that purely non-biological systems can have, then Strong AI is a possibility. Even then, Strong AI might only become possible once we understand the causal foundations of human minds, as Searle argues. Already we can replace small functional areas of the nervous system with non-biological electronic systems in order to preserve, if only in limited form, functions such as vision, hearing, and potentially even memory formation. Technologies like these require accurate models, and such models can only be built from adequate knowledge of the underlying biology. Accordingly, duplicating human minds (creating Strong AI) would require a massive expansion of biological knowledge.
## The not so hard problem
While it may or may not be possible to create a Strong AI, it is certainly feasible to create a Weak AI; working models of human cognition are all that is required. Scientists continue to produce compelling models of human cognition and neurological explanations for many human abilities. One interesting postulate in neuropsychology is that normally functioning human brains store and process syntactic knowledge in discrete, separate locations, a position supported by converging evidence from comparative studies of syntax in language and music (Patel, 2003, p. 676). While this may seem unsurprising on the surface, the traditional approach in connectionist (ANN-based) models of syntax has been to combine representation and processing in a single network rather than in discrete areas. This exemplifies how the creation of AI depends on knowledge from neuropsychology and cognitive modeling.
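The architectural contrast can be sketched schematically. The code below is my own toy illustration, not Patel’s model or any published connectionist network; the lookup tables merely stand in for stored knowledge and trained weights. One design keeps stored syntactic knowledge separate from the process that operates on it; the monolithic alternative folds both into a single mapping from whole inputs to judgments.

```python
# Schematic contrast (my own toy illustration, not Patel's model or a real ANN).

# Design 1: representation and processing kept in discrete components.
LEXICON = {"the": "DET", "dog": "NOUN", "barks": "VERB"}   # stored syntactic knowledge

def judge_with_separate_processing(sentence):
    """Processing step: test the retrieved categories against a DET-NOUN-VERB pattern."""
    categories = [LEXICON.get(word, "UNK") for word in sentence.split()]
    return categories == ["DET", "NOUN", "VERB"]

# Design 2: a single monolithic mapping (standing in for one trained network) that
# takes whole sentences straight to grammaticality judgments.
MONOLITHIC = {"the dog barks": True, "barks dog the": False}

def judge_monolithic(sentence):
    return MONOLITHIC.get(sentence, False)

print(judge_with_separate_processing("the dog barks"))   # True
print(judge_monolithic("the dog barks"))                 # True
```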
Cognitive models are precisely the sorts of things that can be simulated with computers, and ultimately the simulation of intelligence is of more practical concern than the duplication of consciousness. Weak AI is limited only by computational power and by our knowledge of how minds work. Would it even be practical to produce a Strong AI, whose feelings, desires, and beliefs we would have to take into account and toward which we would owe ethical and moral consideration? Moreover, we would have an extremely difficult time proving that such a machine was truly conscious, given how difficult it is to establish the consciousness of other biological species. As addressed earlier, we may not even be able to distinguish a sufficiently advanced Weak AI from a Strong AI, leaving open the possibility that both, or neither, are philosophical zombies.
## What would a conscious machine be like?
If we were able to create genuinely conscious machines, what would they be like? The eventual characteristics of any AI will depend in large part upon the disposition of its creators. Certain aspects of conscious experience might be left inaccessible to the AI; it might have externally set directives or inclinations; and it might have other limitations, technical or otherwise. Alternatively, an AI might be left to develop on its own, independent of external direction. There has been much debate about the implications of such independent AI, and a common fear is that AI will surpass human abilities and become hostile, or in some other way a detriment to human wellbeing.
A duplication of human intelligence with genuine intentional states would certainly be something to regard with caution, and new ethical, moral, and legal issues would arise. For example, a machine that could feel and experience intentional states might be harmed, in the ethical sense, by being treated like any other machine. A robot might have the ability to intervene in public displays of incivility or to prevent criminal activity, and a robotic AI could conceivably be given authority over human beings. Something like this already occurs in some institutions serving the elderly, where robots deliver medicine to residents, and it has already created ripples in the social fabric. Even with today’s limited robotics, jobs once done by humans are being delegated to robots, to a mixed reception: jobs are lost, but new technical positions are created and different opportunities become accessible to former workers. There are many cases where robots save lives or provide a valuable service, and it seems very likely that AI with genuine intentional states would further advance humanity’s capacity to adapt to nature.
## WORKS CITED
Blackmore, Susan. Consciousness (Oxford University Press, 2004).
Churchland, Paul M. and Churchland, Patricia S. “Could a machine think?” Scientific American 262:32-37 (January 1990).
Dennett, Daniel C. Consciousness Explained (Back Bay Books, 1991).
Gennaro, Rocco J. Mind and Brain: A Dialogue on the Mind-Body Problem (Hackett Publishing Company, 1996).
Patel, Aniruddh D. “Language, music, syntax and the brain.” Nature Neuroscience 6(7):674-681 (2003).
Searle, John. “Minds, brains, and programs.” Behavioral and Brain Sciences 3(3):417-457 (1980).
Searle, John. “Is the Brain’s Mind a Computer Program?” Scientific American 262:26-31 (January 1990).
Searle, John. The Mystery of Consciousness (New York Review of Books, 1997).