Open up your laptop or desktop computer, or even your tablet or smartphone, and
strip it down to the basics. (Careful, this may void your warranty.) You'll find a
*motherboard*: a hard board with circuits printed on it. This board, also called a
*logic board*, is like the soil of a garden. Not a garden of vegetables, though—a
garden of chips. Now, the indigenous peoples of the Americas figured out long ago
that if you plant certain vegetables next to each other, they will enrich the soil
and help each other grow: for example, squash, corn, and beans—the "Three
Sisters". Look over your motherboard and you'll see a whole family of chips spread
throughout: RAM chips, ROM chips, a CMOS or two, EEPROMs for flash memory, and so
forth. Each of these has a specific job to do, and without any one of them, your device would
be little more than an expensive brick. But like Sauron's ring, one chip rules
them all: the CPU, or *central processing unit*. Often called the "brain" of a
computer, it carries out billions of instructions per second, conducting a digital
orchestra that gives your device a life of its own.
The CPU in your phone or computer is based on abstract designs by pioneers such as
John von Neumann, who championed the idea of a *stored-program computer*, that is,
a computer that stores both data and instructions in its memory. That might sound
like an obvious choice, but keep in mind that the earliest computers stored very
little, and programs were fed to them as stacks of punched cards. The
stored-program concept means that Instagram sits in your phone's flash memory
until you need to use it, along with all the data it needs to run. No punched
cards needed. The concept revolves around the CPU, whose different parts each have a
job to do. For example, the control unit is the "conductor" that sends signals to
the other parts of the computer; the arithmetic and logic unit (ALU) does the
calculating, applying mathematical and logical operations; and the registers hold
the small pieces of data the CPU needs close at hand. To run a program, the CPU
steps through the *fetch-decode-execute cycle*: it fetches an instruction's bits and
bytes from memory, decodes them by looking up which operation they stand for, and
executes that operation before fetching the next.
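To make the cycle concrete, here's a minimal sketch in Python. The instruction set is invented for illustration: the opcodes LOAD, ADD, STORE, and HALT and the single register A don't correspond to any real CPU.

```python
# A toy CPU running an invented instruction set. Memory holds both the
# program and its data: the stored-program idea in miniature.

def run(memory):
    registers = {"A": 0}  # one general-purpose register
    pc = 0                # program counter: address of the next instruction

    while True:
        opcode, operand = memory[pc]  # FETCH the instruction the counter points at
        pc += 1
        # DECODE the opcode and EXECUTE the matching operation
        if opcode == "LOAD":          # copy a value from memory into register A
            registers["A"] = memory[operand]
        elif opcode == "ADD":         # add a value from memory to register A
            registers["A"] += memory[operand]
        elif opcode == "STORE":       # copy register A back out to memory
            memory[operand] = registers["A"]
        elif opcode == "HALT":        # end the cycle
            return memory

# A four-instruction program to compute 2 + 3. Cells 0-3 hold the
# instructions; cells 4-6 hold the data. No punched cards required.
program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 2, 3, 0]
print(run(program)[6])  # prints 5
```

A real CPU encodes instructions as binary numbers rather than Python tuples and juggles dozens of registers, but the fetch-decode-execute rhythm, and the program sitting in the same memory as its data, are just the same.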
Now, this all sounds nice so far, but let's go a little deeper down the rabbit hole
and get a little more abstract. We can strip the CPU down to a pure concept,
building a model of it in our minds. Imagine a machine that reads and writes symbols on a strip of
tape, pulling the tape back and forth as needed across a read-write head, and
keeping track of its "state" as it goes—basically, remembering what it's doing.
This machine, called a *Turing machine*, models what that CPU can do: it can store
and manipulate data, carry out arithmetic and logic, and, given enough tape and
time, do just about anything a computer should be able to do (we'll simulate one in
code in a moment). The machine was proposed by Alan Turing, a British mathematician
whose contributions earned him the title "father of theoretical computer science".
According to the *Church–Turing thesis*, any function that is computable at all can
be computed by a Turing machine:
> A function is said to be 'effectively calculable' if its values can be found by
> some purely mechanical process [...] We may take this statement literally,
> understanding by a purely mechanical process one which could be carried out by a
> machine. (Turing, A. M. (1938). *Systems of logic based on ordinals.*)
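Simple as it sounds, a Turing machine takes only a few lines of code to simulate. Here's a minimal sketch in Python; the rule table, the state names, and the blank symbol "_" are all invented for illustration. This particular table adds 1 to a binary number written on the tape.

```python
# A tiny Turing machine. Its whole "program" is a rule table mapping
# (state, symbol read) -> (symbol to write, direction to move, next state).

def turing_machine(tape, rules, state="start"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")         # read the cell under the head ("_" = blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write                   # write the new symbol
        head += 1 if move == "R" else -1      # slide the head one cell
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules that add 1 to a binary number: scan right past the last digit,
# then walk back left, turning trailing 1s into 0s until a 0 absorbs the carry.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # ran off the right end; go back and add 1
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry: write 0, carry continues left
    ("carry", "0"): ("1", "L", "done"),    # 0 plus carry: write 1, carry absorbed
    ("carry", "_"): ("1", "L", "done"),    # the number was all 1s; grow a new digit
    ("done",  "0"): ("0", "L", "done"),    # rewind to the left edge, then stop
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(turing_machine("1011", rules))  # binary 1011 is 11; prints 1100, which is 12
```

Every rule says: in this state, reading this symbol, write that symbol, move the head one cell left or right, and switch to that state. That tiny vocabulary, Turing argued, is enough to compute anything that can be computed.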
Later in his life, perhaps not satisfied with merely pulling paper tape around,
Turing opened the doors of philosophy by turning his attention to the question,
"Can machines think?" Turing concluded a 1948 report by describing a thought
experiment in which a machine might be taught to play chess:
> Now get three men A, B and C as subjects for the experiment. A and C are to be
> rather poor chess players, B is the operator who works the paper machine. ...
> Two rooms are used with some arrangement for communicating moves, and a game is
> played between C and either A or the paper machine. C may find it quite
> difficult to tell which he is playing. (Turing, A. M. (1948). *Intelligent Machinery.*)
The 1950 paper *Computing Machinery and Intelligence* tackled the topic in more
depth. Rather than trying to establish whether machines could think, Turing
reduced the problem to a question of *appearing* to think by imitating humans:
that is, showing intelligent behaviour roughly equivalent to a human's. He
proposed a thought experiment which he called the "imitation game", based on a
game in which a human judge is charged with telling which of two (hidden) players
is a man and which is a woman, based on written communication alone. In Turing's
version, one of the players is replaced by a computer with enough storage, enough
speed, and suitable software, and Turing asks:
> Will the interrogator decide wrongly as often when the game is played like this
> as he does when the game is played between a man and a woman? These questions
> replace our original, "Can machines think?" (Turing, A. M. (1950).
> *Computing Machinery and Intelligence.*)
Turing's paper sparked a long-lasting debate about the nature of intelligence that
continues today, and helped to lay the foundation for the field of *artificial
intelligence*, or AI. Readers in 2025 would be hard pressed to spend an hour doing
anything online without encountering AI somehow, particularly in the form of
*large language models* (LLMs) and their use in *generative AI*. News outlets
report on these sensational intelligent machines that answer questions just like
humans do. Students who struggle with a search engine might turn to ChatGPT for
their research—or to generate entire essays and papers. Doomsayers variously
predict the end of high school English and even the end of the world as we know
it, imagining sentient machines punishing humans who dare to hold back their
development. But how can we evaluate what these "intelligent machines" produce? As
LLMs become ubiquitous and their use (and their impact) skyrockets, the breathless
praise and PR give way to serious inquiry. Questions surface: How intelligent are
these machines, anyway? Are they really sentient, or on the road to sentience, as
some claim? Are the stories of robot overlords even well founded? In other words:
do they live up to the hype?