Do machines have minds?

The Book of Minds
Image credit: 123RF (with modifications)

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

You’ll often read in the media that advances in artificial intelligence have reached a point where it is becoming increasingly difficult to tell the difference between humans and machines.

To put it more precisely, the superficial similarities between human and artificial intelligence are making it difficult to see the underlying differences that persist. I’ve been thinking about this issue a lot recently as I’ve followed the controversy surrounding Google’s LaMDA and AI sentience, the growing capacity of deep learning systems to outmatch humans in complicated games, and the use of generative models to create stunning artwork.

Obviously, current AI systems are nothing like the human mind. Despite their spectacular feats, their flaws become easy to spot when they’re pitted against the versatility and flexibility of human intelligence.

But neither can we dismiss them as being stupid. So how do we examine and define the growing “cognitive” (if that’s the proper term) abilities that we see in our machines today?

The Book of Minds, a book by Philip Ball, helped me better adjust my perception of the fast-evolving world of artificial intelligence. And for me, the key was first to change my perception of “minds.” The book, which talks about all kinds of minds, from humans to animals to machines and extraterrestrials, gives you a framework for looking past your instinctive tendency to view things through the lens of your own mind and experience.

Defining the mind

Brain activity
Image credit: 123RF

One of the recurring themes in Ball’s book is the fallacy of the anthropocentric view of minds, which is to evaluate other agents (living or not) in the image of our own minds. This false view can go in different directions. On one end of the spectrum, we dismiss all other organisms as stupid because they don’t have a mind like ours. On the other end, we anthropomorphize everything, from pets to cars to computers, interpreting their behavior in terms of human intelligence.

“Our habit of treating animals as though they are dim-witted humans explains a great deal about our disregard for their well-being; giving them fully fledged, Disneyfied human minds is only the flipside of the same coin,” Ball writes in The Book of Minds.

This habit has stripped us of the ability to see these other minds for what they really are. First, we must accept that there are different types of minds, and there is a “space of possible minds” with multiple dimensions. Second, we must move away from the “pre-Copernican” view of the mindspace, which puts humans at its center. We can, however, try to guess how close other minds are to ours based on their properties across the dimensions of the mindspace.

And finally, we must define what “mind” and “mindedness” are—which itself is a difficult task.

As Ball puts it, “‘mind’ is one of those concepts – like intelligence, thought, and life – that sounds technical (and thus definable) but is in fact colloquial and irreducibly fuzzy. Beyond our own mind (and what we infer thereby about those of our fellow humans), we can’t say for sure what mind should or should not mean.”

Nonetheless, Ball settles on the following definition, which is fashioned after a theory by philosopher Thomas Nagel:

For an entity to have a mind, there must be something it is like to be that entity.

Although this is an ambiguous definition, it can be helpful. “The only mind we know about is our own, and that has experience. We don’t know why it has experience, but only that it does. We don’t even know quite how to characterize experience, but only that we possess it. All we can do in trying to understand mind is to move cautiously outwards, to see what aspects of our experience we might feel able (taking great care) to generalize,” Ball writes.

The book of minds cover
“The Book of Minds” by Philip Ball

We must also acknowledge that not everything has a mind; it takes a certain level of complexity for an organism to have something like one. For example, we can attribute a mind to a dog or a chimpanzee or (maybe) a fly. But does it make sense to consider a fungus minded? Ball stresses that minds are not all-or-nothing entities but matters of degree.

“I don’t think it’s helpful to ask if something has a mind or not, but rather, to ask what qualities of mind it has, and how much of them (if any at all),” Ball writes.

In The Book of Minds, Ball suggests different ways to map the mindspace, including along dimensions such as experience, agency, intelligence, consciousness, and intentionality. While delving into each of these concepts is beyond the scope of this article, what is clear is that mindedness spans several dimensions, and no single mind excels at every one of them.
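
To make these dimensions concrete, here is a minimal sketch in Python. It is my own illustration, not something from the book: it treats each mind as a point in a toy mindspace and measures its distance from a human reference point, echoing Ball’s suggestion that we can gauge how close other minds are to ours without placing ourselves at the center. The dimension names come from the book; every numeric score is a made-up placeholder.

```python
import math
from dataclasses import dataclass

# Dimensions Ball suggests for mapping the mindspace.
DIMENSIONS = ("experience", "agency", "intelligence", "consciousness", "intentionality")

@dataclass
class Mind:
    """A point in a toy 'space of possible minds'.

    Scores are hypothetical values in [0, 1] per dimension;
    the numbers below are placeholders, not measurements.
    """
    name: str
    scores: dict[str, float]

def distance(a: Mind, b: Mind) -> float:
    """Euclidean distance between two minds in the mindspace."""
    return math.sqrt(sum((a.scores[d] - b.scores[d]) ** 2 for d in DIMENSIONS))

human = Mind("human", dict(zip(DIMENSIONS, (0.9, 0.9, 0.8, 0.9, 0.9))))
dog = Mind("dog", dict(zip(DIMENSIONS, (0.7, 0.6, 0.3, 0.5, 0.4))))
chatbot = Mind("chatbot", dict(zip(DIMENSIONS, (0.0, 0.1, 0.6, 0.0, 0.1))))

for other in (dog, chatbot):
    print(f"{other.name}: {distance(human, other):.2f} from the human reference point")
```

Even this toy model makes Ball’s point: no single mind maximizes every dimension, and “how minded” something is becomes a question of where it sits in the space, not a yes-or-no verdict.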

We must, however, be careful about what kinds of attributes we admit into our framework of mindedness, Ball warns, because “the well-motivated wish to escape from old prejudices about our own species constantly risks becoming an exercise in pulling other organisms up to our own level.”

“It can often look like another ‘Mind Club’ affair: whom do we admit (on the grounds of having minds deemed to be a bit like ours), and whom do we turn away at the door?” Ball writes.

Evidently, some animals exceed us in specific skills, such as motor function, auditory acuity, navigational prowess, short-term memory, and location memory. On the other hand (and we’ll get to this in a bit), AI and computers can surpass us at tasks such as mathematical calculation and problems that require brute-force number crunching. But while these are criteria we often use to measure intelligence in humans, they tell us nothing about the minds of other beings.

“The question posed by [Thomas] Nagel still stands as a challenge: what do cognitive capacities of this kind imply for the subjective what-it-is-to-be-like of the animal mind?” Ball notes.

AI and embodied intelligence

Data and algorithms
Image credit: 123RF

Ball notes that “much of the discussion about the perils of artificial intelligence has been distorted by our insistence on giving machines (in our imaginations) minds like ours.”

For example, reporters, analysts, and even scientists often speak about AI systems such as AlphaZero and GPT-3 as if they were speaking about humans. This habit is largely influenced by the “computational view” of the brain, in which the human brain is viewed as a data-processing machine. Scientists have designed computers based on this notion, and in turn, advances in computation have reinforced the computational view of the mind.

One of the characteristics of the computational view is that it is agnostic about the physical substrate of the mind. As long as a thing can manifest the data-processing functions of a mind, it can be considered to have a mind of sorts.
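
In software terms, the substrate-agnostic view resembles structural typing: the “mind” is defined purely by the functions it exposes, not by what implements them. Here is a hypothetical Python sketch of that framing (my illustration; the class and method names are invented):

```python
from typing import Protocol

class MindLike(Protocol):
    """The computational view: a 'mind' is whatever implements
    these data-processing functions, regardless of substrate."""
    def perceive(self, stimulus: str) -> None: ...
    def act(self) -> str: ...

class OrganicBrain:
    """A wetware substrate."""
    def __init__(self) -> None:
        self.last_stimulus = ""
    def perceive(self, stimulus: str) -> None:
        self.last_stimulus = stimulus
    def act(self) -> str:
        return f"neural response to {self.last_stimulus}"

class SiliconProcessor:
    """An electronic substrate."""
    def __init__(self) -> None:
        self.last_stimulus = ""
    def perceive(self, stimulus: str) -> None:
        self.last_stimulus = stimulus
    def act(self) -> str:
        return f"computed response to {self.last_stimulus}"

# Under the substrate-agnostic view, both satisfy the same contract.
agents: list[MindLike] = [OrganicBrain(), SiliconProcessor()]
for agent in agents:
    agent.perceive("light")
    print(agent.act())
```

As the next passage shows, Ball rejects exactly this abstraction, arguing that what such an interface leaves out, namely the body, is essential.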

Ball refutes this view of the mind. “My hunch is that no genuine mind will be like today’s computers, which sit there inactive until fed some numbers to crunch,” he writes. “It’s certainly notable that evolution has never produced a mind that works like such a computer. That’s not to say it necessarily cannot – evolution is full of commitment bias to the solutions it has already found, so we have little notion of the full gamut it can generate – but my guess is that such a thing would not prove very robust.”

Ball believes that embodiment plays an important role in shaping the mind, an opinion shared by many scientists. In every species, the mind, brain, and nervous system evolved together. Furthermore, in most organisms, the brain and nervous system grow along with the body, and the brain’s learning modes and capacities shift as the body ages. Therefore, the mind cannot be considered a piece of software installed on the brain.

“The interaction [between the mind and brain] is much more fluid than that. You could say that the world the mind builds is predicated on the possibility of doing something to it (which includes doing something to or with one’s own body). Thus the kind of world that the mind builds depends on the kind of actions it can take,” Ball writes. “In other words, a central feature of the mind is that it is embodied. It constructs a world based on assumptions and deductions about what interventions the organism might be capable of.”

Do machines have minds?

Human mind vs artificial intelligence

Interestingly, while Ball’s book is dedicated to giving readers a broader view of what different minds look like, he doesn’t think computers have minds.

“Let’s be clear: every robot and computer ever built (on this planet) is mindless. I feel slightly uneasy, in asserting this, that I might be merely repeating the bias against animal minds that I have so deplored,” he writes. “All the same I think the statement is true, or would at least be generally regarded as such by experts in the field.”

He goes on to say that current AI assistants don’t feel bad when you mock them, and robots don’t need moral rights. The main reason for dismissing machine minds, in Ball’s opinion, is that the gap between the principles of logic-circuit design and even the speculative theories of consciousness is too vast.

“Computers today, for all their staggering capabilities, remain closer to complicated arrays of light switches than even to rudimentary insect brains,” Ball writes, a statement that will no doubt be popular among many scientists today.

He does, however, state two caveats. First, there is no guarantee that something mindlike could not arise in a device built from something other than the wetware found in organic brains (e.g., electronic components). And second, we don’t know for sure how today’s AI works, a statement that is at least true for very large neural networks.

From this, he concludes that we should metaphorically consider current AI systems as “a collection of proto-minds” like the nervous systems of early Ediacaran organisms.

“We do not know if they are really the kind of stuff that can ever host genuine minds – that there can ever be something it is like to be a machine. We can and should, however, begin to think about today’s computers and AIs in cognitive terms: to treat them conceptually (if not morally) as if they are minds,” he writes.

If we go back to the “embodied mind” theory, then we must conclude that the mind that today’s AI has is very different from ours. Furthermore, we must accept that the direction AI—in particular machine learning—is currently headed is not toward replicating the human mind. And this gap will not be bridged by scaling today’s neural network architectures and throwing more data and compute power at them.

Ball quotes cognitive scientist and AI expert Josh Tenenbaum and his colleagues as saying, “We believe that future generations of neural networks will look very different from the current state-of-the-art. They may be endowed with intuitive physics, theory of mind, causal reasoning, and other [human-like] capacities.”

We’re reaching a point where our AI systems are increasingly capable of handling complicated tasks. In some cases, AI systems are being given sensitive tasks, such as driving cars, making financial and hiring decisions, and diagnosing patients. Having the right perspective on the AI “mind” will help us better understand what kind of tasks we can trust them with and where they reach their limits.

“While we lack any grasp of the reasoning at work in AI, we’re more likely to indulge our habit of anthropomorphizing, of projecting human-like cognition into places it doesn’t exist,” Ball writes. “But it’s one thing to attribute mind to dumb matter. It’s another, perhaps more dangerous, thing to attribute the wrong kind of mind to systems that genuinely possess some sort of cognitive capacity. Our machines don’t think like us – so we’d better get to know them.”

2 COMMENTS

  1. Thanks for the great review. Using the nebulous term ‘mind’ to speak about minds leaves me begging for clarity. Can Ball explain how we are able to consciously perceive the color red even though colors do not exist anywhere in the universe? No, he can’t. There are only EM frequencies in the environment and neuronal spikes in the brain. No colors. The neurons in the visual cortex that trigger the conscious sensation of red when firing are identical to those that trigger blue and green sensations. And it has nothing to do with neuronal circuitry, since an electric probe inserted directly into the occipital lobe can bypass all the circuitry of the eye and the thalamus and still trigger the same sensations.

    It is obvious to me that a non-physical (not immaterial) entity is needed to explain conscious sensations. Some have taken to calling these sensations ‘qualia’ but assigning a label to them is not an explanation, regardless of the desperate wishes of many in the scientific community.

    Philip Ball is not saying anything that has not been said a million times before by materialists and other soul-deniers. They all derisively dismiss the soul as the undetectable ghost in the machine, as if mockery were a scientific argument. At the same time, they have no qualms about believing in the existence of undetectable dark matter and dark energy, the galactic ghosts of the cosmos. They explain nothing about the conscious mind. It’s all superstition and guesswork. They are merely promoting an agenda. It’s all deceptive propaganda intended to persuade the masses to join their religion. I, for one, am not deceived.

    I know that I will not read this book. Thank you for writing this review and sparing me the effort and the time.

  2. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of an adult human? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
