Now that we have a solid understanding of the exponential function, we can begin to look at things from a more informed perspective. You may have heard of Moore’s Law, which states that the number of transistors that can be placed on an integrated circuit doubles approximately every two years. This effectively means that computing power doubles every 24 months or so. When Gordon E. Moore, co-founder of Intel Corporation, the world’s largest semiconductor chip manufacturer, described this trend in his famous 1965 paper,1 people were very sceptical. He had noticed that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965, and predicted that the trend would continue “for at least ten years.” Many did not believe him, dismissing the prediction as inaccurate: the number could not be expected to grow any further, due to various technical problems. Those sceptics were wrong. In fact, transistor counts have doubled steadily for more than 50 years, without any sign of stopping. But Moore’s Law is not the whole story. The exponential expansion of technology has been growing remarkably smoothly for a much longer time, and integrated circuits are just a tiny fraction of the whole spectrum of change that pervades technological advancement.
Ray Kurzweil notes2 that Moore’s Law of Integrated Circuits was not the first, but rather the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time), from the mechanical calculating devices used in the 1890 US Census, to Turing’s relay-based Bombe machine that cracked the Nazi Enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which Kurzweil used to dictate, in 2001, the very essay that described this phenomenon.
To get an idea of what exponential growth means, look at the following graph, which represents the difference between a linear trend and an exponential one.
Figure 1.1: The difference between a Linear and an Exponential curve. Courtesy of Ray Kurzweil.
As you can see, the exponential trend starts to really take off where the ‘Knee of the Curve’ begins. Before that, things do not seem to change significantly. It is just like the story of the chess board and the king. In the first few days nothing notable happens, but as soon as the curve kicks in, something dramatic happens and things go out of control.
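The chessboard story can be checked with a few lines of Python (an illustrative sketch of the arithmetic, not from the book): grains double on each of the 64 squares, so square n holds 2^(n−1) grains.

```python
# Grains of rice on a chessboard: one grain on the first square,
# doubling on every square after that, so square n holds 2**(n - 1).
def grains_on(square):
    return 2 ** (square - 1)

print(grains_on(8))    # end of the first row: 128 grains, barely a handful
print(grains_on(32))   # halfway: about 2.1 billion grains
print(grains_on(64))   # last square: over nine quintillion grains
print(2 ** 64 - 1)     # all 64 squares together
```

The first row is unremarkable; the second half of the board is where the ‘Knee of the Curve’ lives.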
If we were to plot the same graph on a logarithmic scale, the line representing the exponential trend – which soon got out of control in the first graph – would look much more manageable. On the y-axis (vertical), representing quantity, instead of moving 20–40–60, we would move 10–100–1,000: each step up represents multiplication by ten, rather than addition. So, a curve that would normally go right off the ceiling on a linear graph will look like a straight line on a logarithmic plot. This is why we utilise logarithms when talking about exponentials – on a linear scale there simply is not enough space to show the curve.
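The straight-line behaviour is easy to verify in a few lines of Python (an illustrative sketch, not part of the original text): the base-10 logarithms of a doubling sequence are evenly spaced, which is exactly what a straight line on a logarithmic plot means.

```python
import math

# A quantity doubling every step: 1, 2, 4, 8, ... grows explosively,
# but its base-10 logarithm increases by the same amount each step,
# which is why it plots as a straight line on a logarithmic scale.
values = [2 ** n for n in range(8)]          # 1, 2, 4, 8, 16, 32, 64, 128
logs = [math.log10(v) for v in values]
steps = [round(logs[i + 1] - logs[i], 6) for i in range(len(logs) - 1)]
print(values)   # runaway growth on a linear axis
print(steps)    # constant spacing (log10 of 2) on a logarithmic axis
```

Every step turns out to be the same size – about 0.30103, the base-10 logarithm of 2 – so the exponential runaway becomes a tidy straight line.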
What is even more remarkable is that, when Kurzweil plotted the speed of the world’s fastest calculating devices since 1900 on a graph, he noticed something quite surprising. Remember that a straight line on a logarithmic graph means exponential growth? If you thought exponential growth was fast, you have not seen anything yet. Take a look at this graph.
Figure 1.2: The Exponential Growth of computing power over the last 110 years. Courtesy of Ray Kurzweil.
This plot is logarithmic. You can see that the y-axis grows by five orders of magnitude at each step (that is a 100,000-fold increase every time!), and yet the curve is not a straight line. Instead, what you see is a trend that bends upward. What this means is that there is another exponential curve: there is exponential growth in the rate of exponential growth. Considering what we have just learned about exponential growth, I would say that is pretty remarkable. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year. Computer power is not simply increasing. It is increasing faster and faster, every year.
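As a rough sketch of what those doubling times imply (the era spans and the year-2000 endpoint here are my assumptions for illustration, not Kurzweil’s exact figures), a few lines of Python can compound the doublings era by era:

```python
# Back-of-the-envelope: how many doublings of speed-per-unit-cost
# accumulate in each era, given the doubling times quoted above.
eras = [
    ("1910-1950", 40, 3),   # (label, span in years, doubling time in years)
    ("1950-1966", 16, 2),
    ("1966-2000", 34, 1),   # endpoint assumed for illustration
]
total_doublings = 0.0
for label, span, doubling_time in eras:
    doublings = span / doubling_time
    total_doublings += doublings
    print(f"{label}: {doublings:.1f} doublings, a factor of {2 ** doublings:,.0f}")
print(f"Total: {total_doublings:.1f} doublings")
```

Even on these crude assumptions, the shortening doubling time means each era contributes a vastly larger multiplicative factor than the one before it.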
According to the available evidence, we can infer that this trend will continue for the foreseeable future, or at least another 30 years. Eventually, it will hit physical limits imposed by the laws of nature, and its increase will have to slow down. Some suggest that we may be able to circumvent that problem, once the singularity is reached.
Technological Singularity refers to the time when the speed of technological change is so fast that we are unable to predict what will happen. At that moment, computer intelligence will exceed that of humans, and we will not even be able to understand what changes are happening. The term was first coined by science fiction writer Vernor Vinge and subsequently popularised by many authors, predominantly Ray Kurzweil with his books The Age of Spiritual Machines and The Singularity is Near. This idea, however, is highly speculative, and it is far beyond the purpose of this book to examine its feasibility. Suffice it to say that the singularity is not a necessary requirement for machines to replace most human jobs, as we will see in the next chapters. Whether you buy into the singularity argument or not does not matter. The data is clear, facts are facts, and we only have to look a few years into the future to reach conclusions that are alarming enough.
The Turing Test is a thought experiment proposed in 1950 by the brilliant English mathematician and father of computer science, Alan Turing. Imagine you enter a room where a computer sits on a desk. You notice there is a chat window with two conversations open. As you begin to type messages, you are told that you are in fact talking to one person and one machine. You can take as much time as you want to find out which is which. If you are not able to tell the difference between them, the machine is said to have passed the test.
There are many variations of the same experiment: you could have more interlocutors, and they could all be machines, or they could all be humans, and you might be tricked into thinking otherwise. Whatever the flavour, the main idea is clear: you conduct conversations through natural language to determine whether you are communicating with a human or a computer. A machine able to pass the Turing test is said to have achieved human-level intelligence, or at least perceived intelligence (whether we consider that to be true intelligence or not is irrelevant for the purpose of the argument). Some people call this Strong Artificial Intelligence (Strong AI), and many see Strong AI as an unachievable myth, because the brain is mysterious, and so much more than the sum of its individual components. They claim that the brain operates using unknown, possibly unintelligible quantum mechanical processes, and that any effort to reach or even surpass it using mechanical machines is pure fantasy. Others claim that the brain is just a biological machine, not much different from any other machine, and that it is merely a matter of time before we surpass it with our artificial creations. This is certainly a fascinating topic, one that would require a thorough examination. Perhaps I will explore it in another book. For now, let us concentrate on the present, on what we know for sure, and on the upcoming future. As we will see, there is no need for machines to achieve Strong AI in order to change the nature of the economy, employment, and our lives, forever.
We will start by looking at what intelligence is, how it can be useful, and whether machines have become intelligent, perhaps even more so than us.
Yours,
Federico Pistono
2. Ray Kurzweil, “The Law of Accelerating Returns,” March 7, 2001.