There’s a huge difference between AI and human intelligence—so let’s stop comparing them

Image credit: depositphotos

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

These days, it’s easy to believe arguments that artificial intelligence has become as smart as the human mind—if not smarter. Google released a speaking AI that fools its conversational partners into thinking it’s human. DeepMind, a Google subsidiary, created an AI that defeated the world champion at Go, one of the most complicated board games. More recently, AI proved it can be as accurate as trained doctors in diagnosing eye diseases.

And there are any number of stories that warn about a near future where robots will drive all humans into unemployment.

Everywhere you look, AI is conquering new tasks and skills that were previously thought to be the exclusive domain of human intelligence. But does that mean AI is better than the human mind?

The answer to that question is: It’s wrong to compare artificial intelligence to the human mind, because they are totally different things, even if their functions overlap at times.

Artificial intelligence is good at processing data, bad at thinking in the abstract


Even the most sophisticated AI technology is, at its core, no different from other computer software: bits of data running through circuits at super-fast rates. AI and its popular branches, machine learning and deep learning, can solve any problem as long as you can turn it into the right data sets.

Take image recognition. If you give a deep neural network, the structure underlying deep learning algorithms, enough labeled images, it can compare their data in very complicated ways and find correlations and patterns that define each type of object. It then uses that information to label objects in images it hasn’t seen before.
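To make that “labeled examples in, label predictions out” idea concrete, here is a minimal, hedged sketch using scikit-learn’s bundled digits dataset. The model choice and parameters are illustrative assumptions, not the architecture behind any of the systems mentioned in this article.

```python
# Minimal sketch: a small neural network learns to label images purely from
# labeled examples, then labels images it hasn't seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale digit images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# The network only ever sees pixel values and labels; it learns the
# correlations and patterns that distinguish one digit from another.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on images the model has never seen.
print("accuracy on unseen images:", clf.score(X_test, y_test))
```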

The same process happens in voice recognition. Given enough digital samples of a person’s voice, a neural network can find the common patterns in the person’s voice and determine if future recordings belong to that person.

Everywhere you look, whether it’s a computer vision algorithm doing face recognition or diagnosing cancer, an AI-powered cybersecurity tool ferreting out malicious network traffic, or a complicated AI project playing computer games, the same rules apply.

The techniques change and progress: deep neural networks enable AI algorithms to analyze data through multiple layers; generative adversarial networks (GANs) enable AI to create new data based on the data set it has been trained on; reinforcement learning enables AI to develop its own behavior based on the rules that apply to an environment… But the basic principle remains the same: if you can break a task down into data, AI will be able to learn it.
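As a toy illustration of the reinforcement-learning principle mentioned above, here is a hypothetical sketch of tabular Q-learning on a five-cell corridor. The environment, rewards and hyperparameters are invented for illustration; the point is only that the behavior emerges from the environment’s rules rather than from labeled examples.

```python
# Toy Q-learning: an agent in a 5-cell corridor learns to walk to the right end.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: the only feedback is the reward defined by the
        # environment's rules.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Print the learned greedy action for each non-terminal cell (should be +1).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```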

Take note, however, that designing AI models is a complicated task that few people can accomplish. Deep learning engineers and researchers are some of the most coveted and highly paid experts in the tech industry.

Where AI falls short is thinking in the abstract, applying common sense, or transferring knowledge from one area to another. For instance, Google’s Duplex might be very good at reserving restaurant tables and setting up appointments with your barber, two narrow and very specific tasks. The AI is even able to mimic natural human behavior, using inflections and intonations as any human speaker would. But as soon as the conversation goes off course, Duplex will be hard-pressed to answer in a coherent way. It will either have to disengage or use the help of a human assistant to continue the conversation in a meaningful way.

There are many proven instances in which AI models fail in spectacular and illogical ways as soon as they’re presented with an example that falls outside of their problem domain or is different from the data they’ve been trained on. The broader the domain, the more data the AI needs to be able to master it, and there will always be edge cases, scenarios that haven’t been covered by the training data and will cause the AI to fail.

An example is self-driving cars, which are still struggling to become fully autonomous despite having driven tens of millions of kilometers, much more than a human needs to become an expert driver.

Humans are bad at processing data, good at making abstract decisions


Let’s start with the data part. Unlike computers, humans are terrible at storing and processing information. For instance, you have to listen to a song several times before you can memorize it. For a computer, memorizing a song is as simple as pressing “Save” in an application or copying the file onto its hard drive. Likewise, forgetting is hard for humans. Try as you might, you can’t erase bad memories at will. For a computer, it’s as easy as deleting a file.

When it comes to processing data, humans are obviously inferior to AI. In all the examples listed above, humans might be able to perform the same tasks as computers. However, in the time it takes a human to identify and label one image, an AI algorithm can classify a million. The sheer processing speed of computers enables them to outpace humans at any task that involves mathematical calculations and data processing.

However, humans can make abstract decisions based on instinct, common sense and scarce information. A human child learns to handle objects at a very young age. For an AI algorithm, it takes hundreds of years’ worth of training to perform the same task.

For instance, when humans play a video game for the first time in their life, they can quickly transfer their everyday-life knowledge into the game’s environment, such as staying away from pits, ledges, fire and pointy things (or jumping over them). They know they must dodge bullets and avoid getting hit by vehicles. For AI, every video game is a new, unknown world it must learn from scratch.

Humans can invent new things, including the very technologies that have ushered in the era of artificial intelligence. AI can only take data, compare it, come up with new combinations and presentations, and predict trends based on previous sequences.

Humans can feel, imagine, dream. They can be selfless or greedy. They can love and hate, they can lie, they forget, they confuse facts. And all of those emotions can change their decisions in rational or irrational ways. They’re imperfect and flawed beings made of flesh, which decays with time. But every single one of them is unique in his or her own way and can create things that no one else can.

AI, at its core, is tiny bursts of electricity running through billions of lifeless circuits.

Let’s stop comparing AI with human intelligence


None of this means that AI is superior to the human brain, or vice versa. The point is, they’re totally different things.

AI is good at repetitive tasks that have clearly defined boundaries and can be represented by data, and bad at broad tasks that require intuition and decision-making based on incomplete information.

In contrast, human intelligence is good in settings that require common sense and abstract decision-making, and bad at tasks that require heavy computation and real-time data processing.

Looking at it from a different perspective, we should think about AI as augmented intelligence. AI and human intelligence complement each other, making up for each other’s shortcomings. Together, they can perform tasks that neither could have done alone.

For instance, AI is good at combing through huge amounts of network traffic and pointing out anomalies, but it can make mistakes when deciding which ones are real threats that need investigation. A human analyst, on the other hand, is not very good at monitoring the gigabytes of data going through a company’s network, but is adept at relating anomalies to different events and figuring out which ones are real threats. Together, AI and human analysts can fill each other’s gaps.
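As a hedged sketch of that division of labor, the snippet below uses scikit-learn’s IsolationForest to flag unusual traffic records and hands the short list to a human analyst for judgment. The feature names and data are made up for illustration; this is not any particular security product’s method.

```python
# Sketch: an anomaly detector narrows millions of records down to a handful
# of suspects that a human analyst can actually review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend network records: [bytes transferred, connection duration in seconds]
normal_traffic = rng.normal(loc=[500, 2.0], scale=[50, 0.5], size=(1000, 2))
odd_traffic = np.array([[5000, 0.1], [480, 30.0]])  # two unusual records
traffic = np.vstack([normal_traffic, odd_traffic])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = detector.predict(traffic)  # -1 marks an anomaly

suspects = traffic[flags == -1]
print(f"{len(suspects)} records flagged for human review out of {len(traffic)}")
```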

Now, what about all those articles that claim human labor is going extinct? Well, a lot of it is hype, and so far the facts suggest that the expansion of AI is creating more jobs than it is destroying. But it’s true that AI will obviate the need for humans in many tasks, just as every technological breakthrough has done in the past. That’s probably because those jobs were never meant for humans in the first place. We were spending precious human intelligence and labor on them only because we hadn’t yet developed the technologies to automate them.

As AI becomes adept at performing more and more tasks, we as humans will find more time to put our intelligence to real use: being creative, being social, pursuing arts, sports, literature, poetry and all the things that are valuable because of the human element and character that goes into them. And we’ll use our augmented intelligence tools to enhance those creations.

The future will be one where artificial and human intelligence work together, not apart.

7 COMMENTS

  1. Although correct as of right this second, this article misses the point completely. The point is that AI is rapidly encroaching on aspects of human behaviour that were considered impossible not very long ago.

    Creativity. Strategy. Imagination. All have been demonstrated. The domains might be limited _right now_ but soon won’t be.

    “we should think about AI as augmented intelligence. AI and human intelligence complement each other, making up for each other’s shortcomings.”

    This statement sums up the author’s bias. This was the mantra in chess not long after Kasparov lost to Deep Blue, and indeed, for a while, human+computer teams were the best in the world.

    Computers kept improving; humans didn’t. That’s why people are discussing this so much, because it’s something that needs to be discussed, not magic’d away with an “It’s fine right now, so will continue to be fine forever, because humans are da best”.

  2. Ben > It’s wrong to compare artificial intelligence to the human mind, because they are totally different things, even if their functions overlap at times.

    True. I only wrote a proof of this earlier this year and it hasn’t been published.

    Ben > Where AI falls short is thinking in the abstract

    Yes, this is true.

    Greg > Creativity. Strategy. Imagination. All have been demonstrated. The domains might be limited _right now_ but soon won’t be.

    This is not true. Nothing based on computational theory will ever match human intelligence, regardless of processing power or memory size.

    But, there is more to this. Although computational theory is incapable of handling comprehension or genuine problem solving, this does not mean that it is restricted to biological agents like us. A true, general theory of cognition includes all forms of this whether it is animal, human, machine, or alien. So, if a theory like this were available then machine intelligence of human level would be possible. It is possible that such a theory will be completed and published in just a few years.

    The confusion is understandable. Most people use the term ‘AI’ generically to include anything non-biological. And, people generally assume that it would be computer based since those are the most complex machines. But AI and AGI (human-level intelligence) are two different things. You can’t build AGI using only a computer, a Turing Machine, computational theory, or language theory. This simply isn’t possible. However, there is nothing special about biological intelligence. You should be able to duplicate it once you understand how it works.

  3. The whole article is comparing and contrasting AI and human intelligence from beginning to end. That’s a very useful process – so thanks! Why stop?

  4. Even though I can not quite agree to everything stated above, I find that the article is very fascinating, because it brings up some key aspects about humans and machines. The question I find myself thinking about after reading it is: Will there ever be AGI? Not in a sense, that AI will not develop into something far more capable than it is right now, but will humans be able to “keep” it at their level for any amount of time? I mean if you look at the obvious advantages of computers and humans, if you manage to combine the two, how should there be something, that is still equal in capability? If you take the immense processing power of a computer and combine with the ability to make logic connections, i.e. not having to learn by huge amounts of information, but by being able to make direct and logic connections without having to be fed with examples, won’t this combination of characteristics lead directly to something far more capable (storing and processing large amounts of data and being able to make rational anticipations), without stopping at the limited intelligence of humans? For that matter, I can agree that we should stop anticipating a machine being able to do the same as us humans since there is essentially no way we could control it from there on (or is there?!). I.e. shouldn’t we also think about the necessary limitations when talking about AI?

  5. Completely misses the essence: the cortex is also a data processing system and is extremely good and fast at it. People with this limited level of knowledge with regard to “data” should not be writing fluff like this.
    The way the cortex stores, retrieves, compares and predicts data and data streams is vastly different from what computers do nowadays. But once understood, any universal Turing machine will be able to mirror the behaviour of the cortex and thus the human mind.

  6. The processes in deep learning are actually similar to the processes going on in the brain if you are willing to compare the abstract nature of their behavior. As stated in the article, deep learning compares thousands or millions of examples to the specimen under scrutiny. It then asks, “Is this object or experience similar or different? By how much?” In the human brain, abstract connections are dictated by chemicals that can instantaneously compare the “current experience chemical cocktail” (created by your body’s experiences from the five senses) to the cocktail from the “experience of reference” (a known experience). Chemical cocktails have automatic addressing to your brain like a compass that knows how to point north. Either it matches or it doesn’t. Your brain compares the experiences and looks for an explanation of the differences. It is here that humans are more advanced (at the moment), because we can infer what the experience is and change track to a completely different subject (interpretation of experience).

  7. Great article. Think the point is not that “comparing” them should not be done…but rather equating them or talking about them as if they were the same thing. You will NEVER be able to convey authentic freedom to an AI “creation”…freedom such as a human being has.
