Over the years, I have followed with interest the predictions of doom made about AI. I have felt my own emotional responses shake and shudder as new possibilities emerged. I felt deep existential fears come up and invade my being. Are we really going to be enslaved by robots?
The Terminator franchise, Ex Machina, Surrogates and similar movies had fuelled my sense of ongoing panic. Ray Kurzweil’s notion of an approaching “Singularity,” or Nick Bostrom’s book “Superintelligence,” gave my panic a firm, intellectually grounded basis.
The essential concept was that it was only a matter of time before AI became more intelligent than humans. Just a few more petaflops of computing power; just a few more years living under the rule of Moore’s Law - and then all bets would be off.
We would be unable to predict what the new hyper-intelligent AI would get up to. It might well turn us all into paperclips (google Paperclip Maximiser). It might recursively optimise its operating system five times in under a minute and be utterly unrecognisable from what it was 60 seconds before. It might make us all slaves of its own, perhaps idealistically formulated, program.
On what was all this not-unreasonable panic based?
Well, for decades it had seemed that science was getting close to unravelling one of the two great mysteries that had beset it since before the time of Isaac Newton.
Not the one about what the universe was actually made of. But the other one.
How does the brain produce consciousness? How do all these neurons and glia create the sense of blue that manifests when I look at the sky? How does it make water feel wet?
We were getting closer. Guys like Stanislas Dehaene were tracking down waves of cerebral activity that could be measured just as we became aware of something. We were convinced we were going to get there. Soon we would know just how the brain produced consciousness and then everything would make sense. Rationality would have triumphed.
And, of course, the door would then be open for AI to improve upon the human brain. We could have “conscious robots” that far excelled our own capabilities and possibilities.
The notion that the brain was the source of consciousness was thus the dominant belief in science from the days of Alan Turing and Isaac Asimov. This notion fuelled the development of AI, especially in its early days.
But now there’s a problem.
Dehaene’s breakthrough is now well over a decade old. And we’ve got no further. More and more scientists are starting to shake their heads and get the old drawing board back out. Maybe Bishop Berkeley was right. Maybe panpsychism was right. We don’t know.
And, as the likelihood of the brain being the source of consciousness decreases towards zero, so does the danger associated with AI.
For we may be able to reproduce and improve upon many of the processing functions of our brain, especially those associated with thinking and our “higher mind.” But what about all those important “emergent” effects that simply seem to arise from the heaps of lower order processes of the brain?
What about qualia - the taste of a strawberry or its characteristic colour?
How do processing, emergence and consciousness tie together? This question, which not long ago looked to be soon answerable, is now a long way out of reach.
And, as it recedes into the background, so the notion of AI ever being much more than Siri on steroids recedes with it.
Not that that is nothing. Terminator, Ex Machina and Surrogates might well still come about. But there are huge aspects to being human that will almost certainly remain out of reach for machines.