The Singularity may not be as close as you think
Ray Kurzweil didn’t invent the concept of the technological singularity, but his 2005 book The Singularity Is Near is the best-known use of the term, and the obvious inspiration for the title of this lengthy blog post. The book makes many arguments and predictions, but its most famous prediction is that by the year 2045 artificial machine intelligence (strong AI) will exceed the combined intelligence of all the world’s human brains.
The idea of more-than-human strong machine intelligence didn’t start with Kurzweil. As merely one example, Robert Heinlein’s The Moon is a Harsh Mistress (1966) has a sentient computer nicknamed Mike, and even describes how it achieves consciousness: “Human brain has around ten-to-the-tenth neurons…Mike had better than one and a half times that number of neuristors. And woke up.”
The analogy made a lot of sense. The things that we believed were solely responsible for brain function in human brains seemed to work an awful lot like the on/off switching roles that transistors played in computer brains. Maybe human brains were a bit more complex, but at some point the machines would catch up to us, and then pass us.
Kurzweil’s argument is considerably more complex than Heinlein’s, as would be expected 40 years later. He argues that the human brain is capable of around “10^16 calculations per second and 10^13 bits of memory” and that better understanding of the brain (mainly through better imaging) will allow us to combine Moore’s Law and other new technologies to create strong machine intelligence. Concepts like ‘calculations per second’ (more on this later) have led directly to the exponential-growth charts in Kurzweil’s book.
Needless to say, this kind of prediction is perfect fodder for sensational media stories. We’ve all grown up on Frankenstein, HAL 9000 and Skynet, and the headline “By 2045 ‘The Top Species Will No Longer Be Humans,’ And That Could Be A Problem” was just begging to be written.
But there’s a problem: although there are those who talk about the Singularity and strong artificial intelligence occurring 30 years from now, there is another group of very smart people who say it is unlikely to be anywhere near that soon. And the reason they think so isn’t so much that current machines aren’t that smart; it is that we don’t know enough about the human brain.
Jaron Lanier (who — like Kurzweil — is NOT a true AI researcher, merely someone who writes well about the topic) said this week “We don’t yet understand how brains work, so we can’t build one.”
That’s a really important point. The Wright brothers spent hours observing soaring birds at the Pinnacles in Ohio, saw that they twisted their wing tips to steer, and incorporated that into their wing warping theory of 1899. They were able to create artificial flight because they had a model of natural flight.
Decades ago, brain scientists thought they had an equally clear model of how human brains worked: neurons were composed of dendrites and axons, and the gaps between neurons were synapses, and electrical signals propagated along the neuron just like messages along a wire. They still didn’t have a clue where consciousness came from, but they thought they had a good model of the brain.
Since then, scientists keep discovering that the reality is far more complex, and there are all kinds of activation pathways, neurotransmitters, long-term potentiation, glial cells, plasticity, and (although consensus is against this) perhaps even quantum effects. I’m not a brain researcher, but I do follow the literature. And we don’t appear to know enough to allow AI researchers to mimic or simulate all these various details and processes in machine intelligences.
[This bit is only for those who are really interested in brain function. Kurzweil’s assumption was that the human brain is capable of around 10^16 calculations per second, based on estimates that the adult human brain has around 10^11 (100 billion) neurons and 10^14 (100 trillion) synapses. As of 2005, that seemed like a reasonable way of looking at the subject. However, since then scientists have learned that glial cells may be much more important than we thought only a decade ago. ‘Glia’ is Greek for glue, and historically these cells were thought to kind of hold the brain together, but not play a direct role in cognition. This now appears to be untrue: glial cells can make their own synapses, they make up a MUCH greater percentage of brain tissue in more intelligent animals (a linear relationship, in fact) and there are about 10x as many of them in the human brain as neuronal cells. Kurzweil’s assumptions about the number of calculations per second MAY be accurate. Or they may be anywhere from hundreds to hundreds of thousands of times too low. Perhaps most importantly, the very idea of trying to compare the way computers ‘think’ (FLoating-point Operations Per Second, or FLOPS, which are digital and can be summed) with how the human brain works (which is an analog, stochastic process) may not be a good way of thinking about thinking at all.]
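The back-of-envelope arithmetic above, and the glial-cell uncertainty, can be sketched in a few lines. Note that the ~100 updates-per-second figure is an illustrative assumption I am using to connect the synapse count to the 10^16 figure; it is not stated in Kurzweil’s book or in this post.

```python
# Rough sketch of the brain-capacity estimates discussed above.
# The updates_per_second value is an ASSUMPTION for illustration only.

neurons = 10**11            # ~100 billion neurons (adult human brain)
synapses = 10**14           # ~100 trillion synapses
updates_per_second = 100    # assumed average rate per synapse (illustrative)

calcs_per_second = synapses * updates_per_second
print(f"Estimated 'calculations' per second: {calcs_per_second:.0e}")

# If glial cells (per the text, roughly 10x as numerous as neurons, and
# able to form their own synapses) also participate in computation, the
# estimate could be low by a large, unknown factor:
for factor in (100, 100_000):
    print(f"If low by {factor}x: {calcs_per_second * factor:.0e}")
```

The point of the sketch is how sensitive the headline number is: multiply by an unknown factor of hundreds to hundreds of thousands and the comparison with Moore’s Law curves shifts by decades.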
If you do a survey of strong AI researchers, rather than popularisers, you still get a median value of around 2040. But the tricky bit is the range of opinions: Kurzweil and his group are clustered around 2030-2045…but there is another large group that thinks it may be 100 years off. To quote the guy who did the meta-analysis of all the informed views: “…my current 80% estimate is something like five to 100 years.” That’s a range you could drive a truck through.
The more pessimistic group points out that although we now have computers that can beat world champions at chess or Jeopardy!, and even fool a percentage of people into thinking they are talking to a real person, these computers are almost certainly not doing that in any way that is similar to how the human brain works. The technologies that enable things like Watson and Deep Blue are weak AI, and are potentially useful, but they should not necessarily be considered stepping stones on the path to strong AI.
Based on my experience following this field since the mid-1970s, I am now leaning (sadly) to the view that the pessimists will be correct. Don’t get me wrong: at any point there could be a breakthrough in our understanding of the brain, or in new technologies that are better able to mimic the human brain, or both. And the Singularity could occur in the next 12 months. But that’s not PROBABLE, and from a probability perspective I would be surprised to see the Singularity before my 100th birthday, 50 years from now in 2064. And I would not be surprised if it still hadn’t happened in 2114.
So who cares about the Singularity? If it is likely to not happen until next century, then any effort spent thinking about it now is a waste of time, right?
In the early 1960s, hot on the heels of the Cuban Missile Crisis and Mutually Assured Destruction (MAD) nuclear war scenarios, American musical satirist Tom Lehrer wrote a song that was what he referred to as ‘pre-nostalgia’. Called “So Long, Mom (A Song for World War III)”, he explained his rationale:
“It occurred to me that if any songs are going to come out of World War III…we better start writing them now.”
In the same way, the time to start thinking about strong AI, the Singularity, and related topics is BEFORE they occur, not after.
This will matter for public policy makers, those in the TMT industry, and anyone whose business might be affected by an equal-to-or-greater-than-human machine intelligence. Which is more or less everyone!
And even if the Singularity (with a capital S) doesn’t happen for 100 years, the exercise of thinking about what kinds of effects stronger artificial intelligence will have on business models and society is a wonderful thought experiment, and one that leads to useful strategy discussions, even over the relatively short term.
I would like to once again thank my friend Brian Piccioni, who has discussed the topic of strong and weak AI with me over many lunches and coffees in the past ten years, and who briefly reviewed this article. All errors and omissions are mine, of course.