The Singularity may not be as close as you think

Objects in mirror

Ray Kurzweil didn’t invent the concept of the technological singularity, but his 2005 book The Singularity Is Near is the best-known use of the term, and the obvious inspiration for the title of this lengthy blog post. The book makes many arguments and predictions, but the most famous is that by the year 2045 artificial machine intelligence (strong AI) will exceed the combined intelligence of all the world’s human brains.

The idea of more-than-human strong machine intelligence didn’t start with Kurzweil. As merely one example, Robert Heinlein’s The Moon is a Harsh Mistress (1966) has a sentient computer nicknamed Mike, and even describes how it achieves consciousness: “Human brain has around ten-to-the-tenth neurons…Mike had better than one and a half times that number of neuristors. And woke up.”

The analogy made a lot of sense. The neurons that we believed were solely responsible for brain function seemed to work an awful lot like the on/off switches that transistors provided in computer brains. Maybe human brains were a bit more complex, but at some point the machines would catch up with us, and then pass us.

Kurzweil’s argument is considerably more complex than Heinlein’s, as would be expected 40 years later. He argues that the human brain is capable of around “10^16 calculations per second and 10^13 bits of memory” and that better understanding of the brain (mainly through better imaging) will allow us to combine Moore’s Law and other new technologies to create strong machine intelligence. Concepts like ‘calculations per second’ (more on this later) have led directly to charts like this from Kurzweil’s book:

[Chart from Kurzweil’s book: “Exponential Growth of Computing”.]
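To make the logic behind that kind of chart concrete, here is a minimal back-of-envelope sketch in Python. It assumes an illustrative 2005 baseline for compute available per $1,000 and a Moore’s-Law-style doubling time, then asks when that curve crosses Kurzweil’s 10^16 calculations-per-second figure for one brain. The baseline and doubling period are my own assumptions for illustration, not numbers from the book.

```python
import math

# Illustrative assumptions, NOT figures from Kurzweil's book:
BASELINE_YEAR = 2005
BASELINE_CPS = 1e9       # assumed calculations/sec available per $1,000 in 2005
DOUBLING_YEARS = 2.0     # assumed Moore's-Law-style doubling period

BRAIN_CPS = 1e16         # Kurzweil's estimate for one human brain

# Solve BASELINE_CPS * 2**(t / DOUBLING_YEARS) == BRAIN_CPS for t:
years_needed = DOUBLING_YEARS * math.log2(BRAIN_CPS / BASELINE_CPS)
print(f"Crossover around {BASELINE_YEAR + years_needed:.0f}")  # ~2052 with these inputs
```

Note how sensitive the answer is to the inputs: shift the assumed baseline down by a factor of ten, or stretch the doubling time to three years, and the crossover slips by roughly seven or twenty-three years respectively. That sensitivity is part of why forecasts in this space diverge so widely.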

Needless to say, this kind of prediction is perfect fodder for sensational media stories. We’ve all grown up on Frankenstein, HAL 9000 and Skynet, and the headline “By 2045 ‘The Top Species Will No Longer Be Humans,’ And That Could Be A Problem” was just begging to be written.

But there’s a problem: although there are those who talk about the Singularity and strong artificial intelligence occurring 30 years from now, there is another group of very smart people who say it is unlikely to be anywhere near that soon. And their reasoning isn’t so much that current machines aren’t smart enough; it is that we don’t know enough about the human brain.

Jaron Lanier (who — like Kurzweil — is NOT a true AI researcher, merely someone who writes well about the topic) said this week, “We don’t yet understand how brains work, so we can’t build one.”

That’s a really important point. The Wright brothers spent hours observing soaring birds at the Pinnacles in Ohio, saw that they twisted their wing tips to steer, and incorporated that observation into their 1899 theory of wing warping. They were able to create artificial flight because they had a model of natural flight.

Decades ago, brain scientists thought they had an equally clear model of how human brains worked: neurons were composed of dendrites and axons, the gaps between neurons were synapses, and electrical signals propagated along the neuron just like messages along a wire. They still didn’t have a clue where consciousness came from, but they thought they had a good model of the brain.

Since then, scientists keep discovering that the reality is far more complex, and there are all kinds of activation pathways, neurotransmitters, long-term potentiation, glial cells, plasticity, and (although consensus is against this) perhaps even quantum effects. I’m not a brain researcher, but I do follow the literature. And we don’t appear to know enough to allow AI researchers to mimic or simulate all these various details and processes in machine intelligences.

[This bit is only for those who are really interested in brain function. Kurzweil’s assumption was that the human brain is capable of around 10^16 calculations per second, based on estimates that the adult human brain has around 10^11 (100 billion) neurons and 10^14 (100 trillion) synapses. As of 2005, that seemed like a reasonable way of looking at the subject. However, since then scientists have learned that glial cells may be much more important than we thought only a decade ago. ‘Glia’ is Greek for glue, and historically these cells were thought to kind of hold the brain together, but not play a direct role in cognition. This now appears to be untrue: glial cells can make their own synapses, they make up a MUCH greater percentage of brain tissue in more intelligent animals (a linear relationship, in fact), and there are about 10x as many of them in the human brain as neuronal cells. Kurzweil’s assumptions about the number of calculations per second MAY be accurate. Or they may be anywhere from hundreds to hundreds of thousands of times too low. Perhaps most importantly, the very idea of trying to compare the way computers ‘think’ (floating-point operations per second, or FLOPS, which are digital and can be summed) with how the human brain works (which is an analog, stochastic process) may not be a good way of thinking about thinking at all.]
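For those who like to see the arithmetic, here is a rough sketch of the estimate in the bracketed paragraph above, together with the glial-cell uncertainty. The neuron and synapse counts come from the paragraph; the per-synapse signalling rate is my own illustrative assumption chosen to reproduce the 10^16 figure.

```python
NEURONS = 1e11            # ~100 billion neurons (from the estimates above)
SYNAPSES = 1e14           # ~100 trillion synapses (from the estimates above)
SIGNALS_PER_SEC = 1e2     # assumed ~100 signals per synapse per second

base_estimate = SYNAPSES * SIGNALS_PER_SEC
print(f"Kurzweil-style estimate: {base_estimate:.0e} calculations/sec")  # 1e+16

glial_cells = 10 * NEURONS  # roughly 10x as many glial cells as neurons
print(f"Glial cells not counted in that estimate: {glial_cells:.0e}")

# If the base estimate is "hundreds to hundreds of thousands of times too low"
# once glia are included, the computational target moves dramatically:
for multiplier in (1e2, 1e5):
    print(f"x{multiplier:.0e} -> {base_estimate * multiplier:.0e} calculations/sec")
```

Even at the low end of that range (a factor of a hundred), a Moore’s-Law-style extrapolation like the one sketched earlier pushes the crossover date out by well over a decade; at the high end, by several decades.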

If you do a survey of strong AI researchers, rather than popularisers, you still get a median value of around 2040. But the tricky bit is the range of opinions: Kurzweil and his group are clustered around 2030-2045…but there is another large group that thinks it may be 100 years off. To quote the guy who did the meta-analysis of all the informed views: “…my current 80% estimate is something like five to 100 years.” That’s a range you could drive a truck through.

The more pessimistic group points out that although we now have computers that can beat world champions at chess or Jeopardy!, and even fool a percentage of people into thinking they are talking to a real person, these computers are almost certainly not doing that in any way that is similar to how the human brain works. The technologies that enable things like Watson and Deep Blue are weak AI, and are potentially useful, but they should not necessarily be considered stepping stones on the path to strong AI.

[Image: IBM’s Watson playing Jeopardy!]

Based on my experience following this field since the mid-1970s, I am now leaning (sadly) to the view that the pessimists will be correct. Don’t get me wrong: at any point there could be a breakthrough in our understanding of the brain, or in new technologies that are better able to mimic the human brain, or both. And the Singularity could occur in the next 12 months. But that’s not PROBABLE, and from a probability perspective I would be surprised to see the Singularity before my 100th birthday, 50 years from now in 2064. And I would not be surprised if it still hadn’t happened in 2114.

So who cares about the Singularity? If it is likely to not happen until next century, then any effort spent thinking about it now is a waste of time, right?

In the early 1960s, hot on the heels of the Cuban Missile Crisis and Mutually Assured Destruction (MAD) nuclear war scenarios, American musical satirist Tom Lehrer wrote a song that he described as ‘pre-nostalgia’. The song was called “So Long, Mom (A Song for World War III)”, and he explained his rationale:

“It occurred to me that if any songs are going to come out of World War III…we better start writing them now.”

In the same way, the time to start thinking about strong AI, the Singularity, and related topics is BEFORE they occur, not after.

This will matter for public policy makers, those in the technology, media, and telecommunications (TMT) industry, and anyone whose business might be affected by an equal-to-or-greater-than-human machine intelligence. Which is more or less everyone!

Next, even if the Singularity (with a capital S) doesn’t happen for 100 years, the exercise of thinking about what kinds of effects stronger artificial intelligence will have on business models and society is a wonderful thought experiment, and one that leads to useful strategy discussions, even over the relatively short term.

I would like to once again thank my friend Brian Piccioni, who has discussed the topic of strong and weak AI with me over many lunches and coffees in the past ten years, and who briefly reviewed this article. All errors and omissions are mine, of course.


4 responses to “The Singularity may not be as close as you think”

  1. Kevin P says :

    How does Quantum Computing factor into this? My understanding is that it’s moving away from “1’s and 0’s” to a more fluid form of computing.

    • duncanpredicts says :

      Hi Kevin,

      That’s a really interesting question. So far, all the anticipated applications for quantum computing have been in things like cryptography, protein folding, and route planning. We KNOW that QCs are particularly well suited to those kinds of mathematical problems, and classical computers aren’t. It is possible that QCs may have some role alongside classical machines in strong machine intelligence, or may even be able to do it on their own. But there is no evidence that this will be true, at least at this time.

      • senethys says :

        I think, most importantly, that QC will help us run many algorithms that we can’t with today’s supercomputers. It wouldn’t be too big of a speculation to assume that the brain uses some interesting algorithms for parallel, network, or any other type of processing you can think of.

  2. Gear Mentation says :

    I think we have some pretty simple observations which are encouraging: first, the human brain can be much smaller, or very deformed, and still function pretty much normally. Second, there is no difference in intelligence between people with large brains and people with small brains. Third, IQ has been increasing, while brain size has been decreasing.

    What we see from this is that it isn’t the number of connections in the brain, it’s the way the brain works: the software and the hardware work together in a certain way to make intelligence, and that’s at least as important as brain mass.

    Once computers get anywhere near the connective complexity of the human brain, it will be all about the software, and very little about the hardware. Possibly even today’s supercomputers could be intelligent, with the right programming.

    So the question about AGI remains: when will we know enough to create intelligence? Possibly, we’ll know enough only after we stop talking about how many connections there are.

    However, the Singularity is not really about AGI. It’s about supertechnology: the elimination of scarcity through robotics and nanotechnology, the elimination of death, plus supercharged expert systems like Watson. That counts in my book as a Singularity, and these things leave much less to the imagination than AGI.
