Kurzweil recognizes the need to understand human intelligence before accurately rebuilding it in a machine, but his solution, reverse-engineering a brain, leaps across the fields of neuroscience, psychology and philosophy. It assumes too much — mainly that building a brain is the same thing as building a mind.
These two terms, “brain” and “mind,” are not interchangeable. It’s feasible that we can re-create the brain; it’s a staggeringly complex structure, but it’s still a physical thing that can, eventually, be fully mapped, dissected and re-formed. Just this month, IBM announced it had created a working artificial neuron capable of reliably recognizing patterns in a noisy data landscape while behaving unpredictably, which is exactly what a natural neuron does. Creating a neuron is light-years away from rebuilding an entire human brain, but it’s a piece of the puzzle.
However, it’s still not a mind. Even if scientists develop the technology to create an artificial brain, there is no evidence that this process will automatically generate a mind. There’s no guarantee that this machine will suddenly be conscious. How could there be, when we don’t understand the nature of consciousness?
Consider just one aspect of mind, consciousness and intelligence: creativity. On its own, creativity is a varied and murky thing, different for each individual. For one person, the creative process involves spending weeks isolated in a remote cabin; for another, it takes three glasses of whiskey; for still another, creativity manifests in unpredictable flashes of inspiration that last minutes or months at a time. Creativity means intense focus for some and long bouts of procrastination for others.
So tell me: Will AI machines procrastinate?
Perhaps not. The singularity suggests that, eventually, AI will be billions of times more powerful than human intelligence. This means AI will divest itself of messy things like procrastination, mild alcoholism and introversion in order to complete tasks similar to those accomplished by its human counterparts. There’s little doubt that software will one day be able to output beautiful, creative things with minimal (or zero) human input. Beautiful things, but not necessarily better. Creative, but not necessarily conscious.
Singularities
Kurzweil, Musk and others aren’t predicting the existence of Tay the Twitter bot; they’re telling the world that we will, within the next 20 years, copy the human brain, trap it inside an artificial casing and therefore re-create the human mind. No, we’ll create something even better: a mind — whatever that is — that doesn’t need to procrastinate in order to be massively creative. A mind that may or may not be conscious — whatever that means.
The technological singularity may be approaching, but our understanding of psychology, neuroscience and philosophy is far more nebulous, and all of these fields must work in harmony in order for the singularity’s promises to be fulfilled. Scientists have made vast advances in technological fields in recent decades, and computers are growing stronger by the year, but a more powerful computer does not equate to a breakthrough in philosophical understanding. More accurately mapping the brain does not mean we understand the mind.
The technological singularity has a longer tail than the law of accelerating returns suggests. Nothing on earth operates in a vacuum, and before we can create AI machines capable of supporting human intelligence, we need to understand what we’re attempting to imitate. Not ethically or morally, but technically. Before we can even think of re-creating the human brain, we need to unlock the secrets of the human mind.