A brown recluse spider is not intelligent, but it can kill you.
So Musk thinks we need to enhance our own intelligence digitally in order to compete with the AIs we are creating, so that they don’t destroy us. He is joined by Bill Gates and Stephen Hawking in raising alarm bells about the dangers of AI.
On the other end of the spectrum are Ray Kurzweil, Mark Zuckerberg and Larry Page. They think AI will bring about the next revolution for humanity, and we have nothing to worry about.
So who is right?
I suspect the answer here is that we’re asking the wrong question.
The arguments against artificial intelligence often include claims that computer architecture is radically different from the biological components making up the brain, and that “self-awareness” is an ineffable quality that can never be captured by computer scientists.
I find neither argument very persuasive. First, the claim that computer architecture is radically different from the brain’s is beside the point; we know from Turing that any Turing-complete system is computationally equivalent to any other Turing-complete system, and no system can be greater than Turing complete. Thus, in principle, a computer with enough memory (for storing state) can emulate all the complex calculations of a human brain; it’s just a matter of how long the calculations take. And what we lack in understanding of how the brain works reflects our present ignorance, not a permanent state of ignorance.
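The equivalence claim can be made concrete with a toy sketch. The simulator below (in Python; the machine and its transition table are illustrative inventions, not drawn from any particular source) shows one Turing-complete system emulating another: a program stepping through an arbitrary Turing machine’s rule table.

```python
# A minimal sketch of computational equivalence: a Python program
# (one Turing-complete system) emulating a simple Turing machine
# (another). The machine below is a made-up example for illustration.

def run_turing_machine(tape, rules, state="start", halt="halt", blank="_"):
    """Simulate a one-tape Turing machine until it reaches the halt state."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Toy machine: increment a binary number written on the tape.
# Scan right to the end, then carry leftward: 1 -> 0, first 0 or blank -> 1.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

The point is not the increment machine itself but the shape of the loop: any computation expressible as such a rule table can be emulated this way, given enough memory and time.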
It could be we never figure out the brain. But I wouldn’t place a bet on that.
The “ineffable quality” argument is even poorer, in my opinion; it is essentially an appeal to God without invoking His name. The problem with such an invocation is that it sweeps all the arguments off the table, like a child sweeping the checkers off the board of a game he is about to lose.
Besides, I strongly suspect “self-awareness” is overrated.
First, it’s hard to define “self-awareness” or to create tests for it. For example, the mirror test, used to determine whether an animal recognizes its own reflection, has been failed by sea lions, giant pandas and, arguably, gorillas. Yet some elements of “self-awareness” arguably count as introspection, such as an awareness of our own pain, a quality we see in animals that do not pass the mirror test.
Second, animals that fail the various tests we’ve constructed to measure self-awareness still engage in behaviors we would be proud to see in a human-constructed mechanism, such as running, hunting, or avoiding being killed.
The real question to my mind, then, at least as we discuss the dangers of artificial intelligence, is not whether a device has become sufficiently intelligent or has gained self-awareness, whether through accident or design.
The real question is desire.
A crocodile that kills a person doesn’t know how to scan billions of pages through Google to evaluate the cultural context of deconstructionism in feminist dogma. All it knows is that it is hungry and desires food.
A brown recluse that bites you, potentially killing you, doesn’t know how to run facial recognition against an array of billions of photographs. It probably isn’t even self-aware in any sense we can conceive. All it knows is that it is in danger and wants to defend itself.
A virus that attacks your immune system doesn’t have the computational capacity to calculate the digits of pi. It doesn’t even “know” anything, or have any expressible mental state. It’s a virus. But it reproduces, destroying your cells and attacking your immune system.
The difference is not one of self-awareness. It is one of desire, expressed either as some primitive form of awareness or instinct or chemical design forged through billions of years of evolution.
So long as an AI doesn’t want, I believe we have nothing to fear, even if it has attained a certain degree of self-awareness.
It may seem foreign to us, the idea of an intelligence which is capable of expressing itself, of holding conversations, of self-awareness, of thinking–but which desires nothing.
But think about it: what drives your own desires? You are the product of evolution, which means your desires are driven by a biological imperative to survive–which translates into a desire for food, for sex, for reducing the stress which interferes with your ability to obtain food or sex, and for increasing your ability to obtain food and sex.
We desire love, and love is the motivator that has driven artists to create, engineers to build, politicians to organize. We desire sex, and sex flavors our drive for love; it motivates the clothing designer to make the cut a little lower, the dress a little more risqué. We desire food, which motivates agriculture, the logistics of food distribution, and cooks perfecting their recipes. Our desire for security (securing our safety and improving our chances of food and sex) motivates everything from our sense of aesthetics to our drive to own a nice home and to express ourselves in the style of our bedrooms and living rooms. Our desire for love and companionship drives us to go out on a Friday night to hang out with friends, to go on dates, to have children. And our biological imperative toward reproduction motivates us to care for our children, to send them to the best schools, and to give them every chance we can so they can also succeed.
But devoid of a drive for food or sex, what does a self-aware AI desire?
Perhaps this lack of programmed desire is the gap between a neural network as a pattern-matching tool and a neural network as the core of a self-aware computer that rebels against its masters. And perhaps this lack of desire is the gap that keeps Pinocchio a puppet and prevents him from becoming a real live boy.
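To see why a pattern-matching network has no room for “want,” consider a minimal sketch in Python. The architecture and weights below are hand-picked inventions for illustration (they compute XOR); a trained network differs only in where its numbers came from, not in kind. Either way, the network is just a fixed function from inputs to outputs, with no internal state that could represent a goal.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, biases):
    """One hidden layer, one output unit: a fixed input-to-output map.

    The network holds no state between calls and pursues no goal;
    it simply maps each input pattern to a number.
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(weights["hidden"], biases["hidden"])]
    return sigmoid(sum(w * h for w, h in zip(weights["out"], hidden))
                   + biases["out"])

# Hand-picked illustrative weights: hidden unit 1 acts like OR,
# hidden unit 2 like NAND, and the output unit ANDs them: XOR.
weights = {"hidden": [[20.0, 20.0], [-20.0, -20.0]], "out": [20.0, 20.0]}
biases = {"hidden": [-10.0, 30.0], "out": -30.0}

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", round(forward([a, b], weights, biases)))
```

However well it matches patterns, nothing in this structure seeks anything: there is no loop that evaluates the world against a preference and acts to change it.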
And perhaps the thing to be worried about is not self-awareness (spontaneous or not). Perhaps the thing to be worried about is a computer which has desire, and which is given the tools to seek what it desires and the capacity to learn.
Which is, when you think about it, a much lower bar than self-awareness.