By Rich Heimann

The “brain in a jar” is a thought experiment in which a disembodied human brain is kept alive in a jar of nutrients. The thought experiment explores human conceptions of reality, mind, and consciousness. This article explores a metaphysical argument against artificial intelligence on the grounds that a disembodied artificial intelligence, or a “brain” without a body, is incompatible with the nature of intelligence.[1]
The brain in a jar is a different inquiry from traditional questions about artificial intelligence. The brain in a jar asks whether thinking requires a thinker. Questions about the possibility of artificial intelligence, by contrast, revolve around what is necessary to make a computer (or a computer program) intelligent. In this view, artificial intelligence is possible if we can understand intelligence and figure out how to program it into a computer.
The 17th-century French philosopher René Descartes deserves much blame for the brain in a jar. Descartes was combating materialism, which explains the world and everything in it as entirely made up of matter.[2] Descartes separated the mind and body to create a neutral space in which to discuss nonmaterial substances like consciousness, the soul, and even God. This philosophy of mind was named Cartesian dualism.[3]
Dualism argues that the body and mind are not one thing but separate and opposite things, made of different substances that inexplicably interact.[4] Descartes’s method was to doubt everything, even his own body, in favor of his thoughts, in order to find something “indubitable,” something he could least doubt, and thereby learn what can be known. The result is an exhausting epistemological pursuit: settling what we can know by manipulating metaphysics, the question of what there is. This kind of solipsistic thinking is unwarranted but was not a personality disorder in the 17th century.[5]

There is reason to sympathize with Descartes. Thinking about thinking has perplexed thinkers since the Enlightenment and spawned odd philosophies, theories, paradoxes, and superstitions. In many ways, dualism is no exception.
It wasn’t until the early 20th century that dualism was legitimately challenged.[6],[7] So-called behaviorism argued that mental states could be reduced to physical states, which were nothing more than behavior.[8] Aside from the reductionism that results from treating humans as collections of behaviors, the issue with behaviorism is that it ignores mental phenomena, explaining the brain’s activity as producing behaviors that can only be observed. Concepts like thought, intelligence, feelings, beliefs, desires, and even heredity and genetics are eliminated in favor of environmental stimuli and behavioral responses.
Consequently, one can never use behaviorism to explain mental phenomena since the focus is on external observable behavior. Philosophers like to joke about two behaviorists evaluating their performance after sex: “It was great for you, how was it for me?” says one to the other.[9],[10] By concentrating on the observable behavior of the body and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about intelligence.

This is why behaviorists fail to define intelligence.[11] They believe there is nothing to it.[12] Consider Alan Turing’s eponymous Turing Test. Turing dodges defining intelligence by saying that intelligence is as intelligence does. A jar passes the Turing Test if it fools another jar into believing it is behaving intelligently, answering questions with responses that merely seem intelligent. Turing was a behaviorist.
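To make the behaviorist stance concrete, here is a minimal sketch of the test as a purely behavioral evaluation, in Python. The names, the canned answers, and the scoring rule are illustrative assumptions, not Turing’s protocol verbatim; the point is that nothing in the test ever inspects how an answer is produced.

```python
from typing import Callable

# A candidate is any mapping from question to answer. The test judges
# observable responses only, never the mechanism behind them.
Candidate = Callable[[str], str]

def imitation_game(candidate: Candidate,
                   judge: Callable[[str, str], bool],
                   questions: list[str]) -> float:
    """Return the fraction of answers the judge deems human-like."""
    return sum(judge(q, candidate(q)) for q in questions) / len(questions)

# A "jar" answering from a canned table passes if the judge is fooled
# by its behavior, which is exactly the behaviorist point.
canned = {"What is 2 + 2?": "4", "Are you conscious?": "Of course I am."}
jar: Candidate = lambda q: canned.get(q, "Let me think about that.")
```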
Behaviorism’s decline in influence was a direct result of its inability to explain intelligence. By the end of the 1950s, behaviorism was largely discredited. The most important attack was delivered in 1959 by the American linguist Noam Chomsky, who excoriated B. F. Skinner’s book Verbal Behavior.[13],[14] “A Review of B. F. Skinner’s Verbal Behavior” is Chomsky’s most cited work, and despite the prosaic name, it has become better known than Skinner’s original.[15]
Chomsky sparked a reorientation of psychology toward the brain, dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of mind. Functionalism views intelligence (i.e., mental phenomena) as the brain’s functional organization, where individuated functions like language and vision are understood by their causal roles.
Unlike behaviorism, functionalism focuses on what the brain does and where brain function happens.[16] However, functionalism is not interested in how something works or whether it is made of the same material. It doesn’t care if the thing that thinks is a brain or if that brain has a body. If it functions like intelligence, it is intelligent, just as anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time.
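The clock analogy can be put in code. Below is a minimal sketch, in Python, of the functionalist stance sometimes called multiple realizability (see note 21): anything that plays the causal role of telling time counts as a clock, whatever it is made of. The class names are hypothetical illustrations.

```python
from typing import Protocol

class Clock(Protocol):
    def tell_time(self) -> str: ...

class PendulumClock:      # realized in gears, weights, and a pendulum
    def tell_time(self) -> str:
        return "12:00"

class BeerCanClock:       # realized in Searle's beer cans and windmills
    def tell_time(self) -> str:
        return "12:00"

def what_time_is_it(clock: Clock) -> str:
    # Functionalism in miniature: only the causal role matters,
    # not the substrate that happens to play it.
    return clock.tell_time()
```

Both implementations are interchangeable to any caller, just as functionalism treats any substrate that mimics the right causal roles as a mind.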
The American philosopher and computer scientist Hilary Putnam extended functionalism with computational concepts in “Psychological Predicates,” forming computational functionalism.[17],[18] Computationalism, for short, views the mental world as grounded in a physical system (i.e., a computer) using concepts such as information, computation (i.e., thinking), memory (i.e., storage), and feedback.[19],[20],[21] Today, artificial intelligence research relies heavily on computational functionalism, where intelligence is organized into functions such as computer vision and natural language processing and explained in computational terms.
Unfortunately, functions do not think. They are aspects of thought. The issue with functionalism—aside from the reductionism that results from treating thinking as a collection of functions (and humans as brains)—is that it ignores thinking. While the brain has localized functions with input–output pairs (e.g., perception) that can be represented as a physical system inside a computer, thinking is not a loose collection of localized functions.
John Searle’s famous Chinese Room thought experiment is one of the strongest attacks on computational functionalism. The philosopher and longtime professor at the University of California, Berkeley, argued it is impossible to build an intelligent computer because intelligence is a biological phenomenon that presupposes a thinker who has consciousness. This argument runs counter to functionalism, which treats intelligence as realizable by anything that can mimic the causal roles of specific mental states with computational processes.
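A toy version of the Chinese Room fits in a few lines. In this hypothetical sketch, the “rulebook” is a lookup table: the program returns fluent-looking Chinese while nothing in the system, program or hardware, understands a word of it.

```python
# A toy Chinese Room. The rulebook maps input symbols to output symbols;
# the entries here are hypothetical stand-ins for Searle's rulebook.
rulebook = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会思考吗？": "当然会。",      # "Can you think?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Correct symbol manipulation, Searle argues, is not comprehension.
    return rulebook.get(symbols, "请再说一遍。")  # "Please say that again."
```

To a functionalist, the room mimics the causal role of answering in Chinese and therefore understands; to Searle, it shows that behavior and computation alone cannot add up to a thinker.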
The irony of the brain in a jar is that Descartes would not have considered “AI” thinking at all. Descartes was familiar with the automata and mechanical toys of the 17th century. However, the “I” in Descartes’s dictum “I think, therefore I am” treats the human mind as non-mechanical and non-computational. The “cogito” argument implies that for thought, there must also be a subject of that thought. While dualism seems to grant permission for the brain in a jar by eliminating the body, it also contradicts the claim that AI can ever think, because any thinking would lack a subject of that thinking, and any intelligence would lack an intelligent being.
Hubert Dreyfus explains how artificial intelligence inherited a “lemon” philosophy.[22] The late professor of philosophy at the University of California, Berkeley, was influenced by phenomenology, the philosophy of conscious experience.[23],[24],[25],[26] The irony, Dreyfus explains, is that philosophers came out against many of the philosophical frameworks artificial intelligence adopted at its inception, including behaviorism, functionalism, and representationalism, all of which ignore embodiment.[27],[28],[29] These frameworks are contradictory and incompatible with the biological brain and natural intelligence.
To be sure, the field of AI was born at an odd philosophical hour. This has largely inhibited progress toward understanding intelligence and what it means to be intelligent.[30],[31] Of course, the accomplishments within the field over the past seventy years also show that the discipline is not doomed. The reason is that the philosophy adopted most frequently by friends of artificial intelligence is pragmatism.
Pragmatism is not a philosophy of mind. It is a philosophy that focuses on practical solutions to problems like computer vision and natural language processing. The field has found shortcuts that solve problems, and we misinterpret those shortcuts as intelligence, driven largely by our tendency to project human qualities onto inanimate objects. The failure of AI to understand, and ultimately solve, intelligence shows that metaphysics may be necessary for AI’s supposed destiny. However, pragmatism shows that metaphysics is not necessary for real-world problem-solving.
This strange line of inquiry shows that real artificial intelligence could not be real unless the brain in a jar has legs, which spells doom for any arbitrary GitHub repository claiming artificial intelligence.[32] It also spells doom for all businesses “doing AI” because, aside from the metaphysical problem, there is an ethical one: it would be hard, if not impossible, to satisfy without declaring your computer’s power cord and mouse parts of an intelligent being, or without the animal experimentation required to attach legs and arms to your computers.
About the author
Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.
[1] For example, in 2017, François Chollet discussed the implausibility of a so-called intelligence explosion. He concluded that the expansion of intelligence will not come from “merely tuning the gears of some brain in a jar in isolation.” François Chollet, “The Implausibility of Intelligence Explosion,” Medium, Nov. 2017.
[2] American philosopher and professor emeritus at New York University Thomas Nagel argues that materialism cannot explain life because it cannot explain thought. Nagel doesn’t show that materialism cannot explain thought, but it is at least true that materialism hasn’t explained thought. Thomas Nagel, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (Oxford University Press, 2012).
[3] I could not find Descartes ever referring to his philosophy of mind as Cartesian dualism, and it is now more commonly called substance dualism.
[4] The thinking stuff is what Descartes called res cogitans, which, unlike the body, does not exist in time or space. Descartes claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.”
[5] The most significant obstacle for dualism was explaining how the mental could interact with the physical. According to Descartes, all mental and physical interactions occurred within the pineal gland, located in the brain, through the so-called “animal spirits.” Still, this fails to explain how nonmaterial thought causes the material body to act. Dualism created a new problem known as the mind-body problem. In a series of letters, Princess Elisabeth of Bohemia asked Descartes to clarify how the immaterial mind could cause the material body to act. Unfortunately, the correspondence between the two over several years produced no answer. Shapiro, Lisa, “Elisabeth, Princess of Bohemia,” The Stanford Encyclopedia of Philosophy (Fall 2021 Edition), Edward N. Zalta (ed.).
[6] The issue with materialism was that it was not a theory of mind. After all, which material states does materialism say give rise to which mental phenomena?
[7] Epiphenomenalism was a brief attempt to answer Princess Elisabeth’s critique by arguing that no causal interaction runs from mental states to physical states. Epiphenomenalism sees mental states as byproducts of physical states, not causes of them. Other alternatives have been suggested, including psychophysical parallelism, which explains interactions between the mental and physical as perfectly coordinated events without any causal interaction, and occasionalism, which explains all interaction between the mental and physical as mediated by God. Both create more problems than they solve.
[8] Behaviorism was a reaction to 19th-century philosophy of mind, which focused on the unconscious, and to psychoanalysis, which was difficult to test experimentally and thus to use for prediction.
[9] Philosophy of Mind, Lecture #3 UC Berkeley Philosophy 132, Spring 2011. YouTube. University of California at Berkeley, 2011.
[10] Hilary Putnam provides a more serious counterexample, named Super-Spartans, which is an attack on behaviorism. Putnam, Hilary. “Brains and Behavior.” In Readings in Philosophy of Psychology, Volume I, edited by Ned Block, 24–36. Cambridge, MA and London, England: Harvard University Press, 2013.
[11] For example, Legg and Hutter settled on the following definition of artificial intelligence: “The goal of machine intelligence as an autonomous, goal-seeking system; [for which] intelligence measures an agent’s ability to achieve goals in a wide range of environments.” Like all definitions, this explains artificial intelligence by sidestepping the whole thorny topic of intelligence, merely restating intelligence as a measure of an “agent’s ability to achieve goals in a wide range of environments.” S. Legg and M. Hutter, “Universal Intelligence: A Definition of Machine Intelligence,” Minds & Machines 17 (2007): 391–444.
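The paper condenses this definition into a formula; reconstructed here from the cited source, it reads

\[
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi
\]

where \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the expected value agent \(\pi\) achieves in \(\mu\). The formula makes “a wide range of environments” precise while still treating intelligence as whatever scores well.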
[12] György Buzsáki in his book The Brain from Inside Out explains that the brain is not passively absorbing stimuli. Rather, it is actively searching through alternative possibilities to test various options. Buzsáki concludes that the brain does not represent information: it constructs it. Buzsáki, György. The Brain from Inside Out, Oxford University Press, 2019.
[13] B. F. Skinner, Verbal Behavior (1957).
[14] Noam Chomsky, “A review of B. F. Skinner’s Verbal Behavior,” Language 35 (1959): 26–58.
[15] Skinner argued that language was acquired through behaviorist reinforcement learning, whereas Chomsky argued that language was innate and evolved.
[16] Artificial narrow intelligence (ANI) is a form of functionalism. Like functionalism, ANI cares about the function rather than what material the solution is made of.
[17] Putnam, Hilary (1967). “Psychological Predicates”. In Capitan, W. H.; Merrill, D. D. (eds.). Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press. pp. 37–48.
[18] Putnam saw computational descriptions as a way to dissolve the mind-body problem.
[19] Twenty years after first publishing “Psychological Predicates,” Putnam reversed course and came to view all computational descriptions of intelligence as trivial. He concluded that computational descriptions could be applied to any mental state of any physical thing.
[20] Putnam, Hilary (1988). Representation and Reality. Cambridge, Massachusetts: MIT Press.
[21] This was Putnam’s notion of “multiple realizability.” You can find examples of AI being compared to puppies, chimps, cockroaches, rats, and cats, as though human intelligence were an advanced case of animal intelligence and all animals were interchangeable. Such examples are self-serving and reflect whatever the researcher wants to support. For example, if you “made a computer out of old beer cans powered by windmills; if it had the right program, it would have to have a mind.” Searle, John R. (1984), “1984 Reith Lectures-Minds, Brains and Science: 2. Beer cans and meat machines,” The Listener 112 (15 November 1984): 14–16.
[22] Conversations with History: Hubert Dreyfus. YouTube. UCTV, 2008.
[23] Dreyfus relied heavily on Martin Heidegger and Maurice Merleau-Ponty.
[24] Thomas Nagel emphasized phenomenology to explain consciousness. Thomas Nagel, “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435–450. doi:10.2307/2183914.
[25] Dreyfus was a fierce opponent of symbolic representation and expert systems. He correctly pointed out that expert systems could not successfully solve intelligence because the brain does not create rules or perfect representations of objects. Expert systems were very popular in the early days of artificial intelligence and are a type of symbolic artificial intelligence (i.e., non-statistical artificial intelligence) that uses expert knowledge, rules, and heuristics to support human decision-making. The so-called “father of expert systems,” Edward Feigenbaum, described Dreyfus’s phenomenological framework as “cotton candy.” The 1994 Turing Award winner and Stanford University professor may be right. However, Dreyfus’s argument that the brain learns how to see the world directly has stood the test of time. Pamela McCorduck, Machines Who Think (CRC Press, 1979), p. 196.
[26] Among the most notable expert systems to come out of Feigenbaum’s community at Stanford was MYCIN, developed by Edward Shortliffe with Bruce Buchanan. MYCIN diagnosed blood infections using about 450 rules. Buchanan, Bruce G., and Edward H. Shortliffe. Rule-Based Expert Systems: MYCIN. Reading, MA: Addison-Wesley, 1984.
[27] Wittgenstein, Ludwig. Philosophical Investigations. Translated by G. E. M. Anscombe. 1953.
[28] Martin Heidegger and Dennis J. Schmidt. Being and Time. Albany: State University of New York Press. 2010.
[29] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 1–19; Searle, J. R. (1990). “Is the Brain’s Mind a Computer Program?” Scientific American, (January), 26–31; Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
[30] Philosopher Hubert Dreyfus argued that computers could not acquire intelligence because they don’t have a body. He meant that humans are more than brains, and an important part of intelligence cannot be implemented in a computer without a body. Dreyfus, H. L. (1992). What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press, pp. 173, 235, 248, and 250; which built upon earlier work: Hubert L. Dreyfus, “Alchemy and Artificial Intelligence,” Santa Monica, CA: RAND Corporation, 1965, and Hubert L. Dreyfus, What Computers Can’t Do (Cambridge, MA: MIT Press, 1972).
[31] Ludwig Wittgenstein concentrates on the confusion of describing machines in human terms in The Blue and Brown Books. Wittgenstein famously wrote that the question whether a machine thinks “seems somehow nonsensical. It is as though we had asked ‘Has the number 3 a color?’”
[32] In fact, arms and legs may not be enough. When philosophers say that intelligence is a biological phenomenon that presupposes consciousness, what they mean is that a room, vat, or jar with legs, feet, arms, and hands would not be enough. The room, vat, or jar would need to be human. Some, though, like Hans Moravec, believed that arms and legs would be enough. I think he was serious when he wrote, “If we could graft a robot to a reasoning program, we wouldn’t need a person to provide the meaning anymore: it would come from the physical world.” Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence (p. 272). New York, NY: Basic Books. ISBN 0-465-02997-3.