The Coming Problem of Our iPhones Being More Intelligent Than Us

Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. He also predicted that Moore's Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years – until 2025 – and then give way to new paradigms of technological change.
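
Taken at face value, an 18-month doubling period compounds into enormous multipliers over a couple of decades. The Python sketch below is a rough back-of-the-envelope illustration; the fixed 1.5-year period and the 1999 baseline are simplifying assumptions for demonstration, not figures taken from Kurzweil:

```python
# Back-of-the-envelope arithmetic for the 18-month doubling the article cites.
# Assumptions for illustration only: a fixed 1.5-year doubling period and a
# 1999 baseline; these are simplifications, not Kurzweil's exact figures.

DOUBLING_PERIOD_YEARS = 1.5  # "doubles every 18 months"

def growth_factor(start_year: int, end_year: int) -> float:
    """How many times processing capability multiplies between two years."""
    doublings = (end_year - start_year) / DOUBLING_PERIOD_YEARS
    return 2 ** doublings

if __name__ == "__main__":
    for end in (2010, 2023, 2025):
        print(f"1999 -> {end}: roughly {growth_factor(1999, end):,.0f}x")
```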

Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted – in around 2020 – using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling. Within seven years – about when the iPhone 11 is likely to be released – the smartphones in our pockets will be as computationally intelligent as we are. It doesn't stop there, though. These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race. Already, our computers have a big advantage over us: they are connected via the Internet and share information with one another billions of times faster than we can. It is hard even to imagine what will become possible with these advances, or what their implications will be.

Doubts about the longevity of Moore's Law and the practicability of these advances are understandable. There are limits, after all, to how far transistors can be shrunk: nothing can be smaller than an atom. Even short of this physical limit, there will be many other technological hurdles. Intel acknowledges these limits but suggests that Moore's Law can keep going for another five to ten years. So the silicon-based chips in our laptops will likely sputter their way to matching the power of a human brain.

Kurzweil says Moore's Law isn't the be-all and end-all of computing and that the advances will continue regardless of what Intel can do with silicon. Moore's Law is just the latest of five computing paradigms: electromechanical, relay, vacuum tube, discrete transistor, and integrated circuit. In his 1999 “Law of Accelerating Returns,” Kurzweil explains that technology has been advancing exponentially since the advent of evolution on Earth and that computing power has risen exponentially: from the mechanical calculating devices used in the 1890 U.S. Census, through the machines that cracked the Nazi Enigma code and the CBS vacuum-tube computer, to the transistor-based machines used in the first space launches and, more recently, the integrated-circuit-based personal computer.

With exponentially advancing technologies, things move very slowly at first and then advance dramatically. Each new technology follows an S-curve: growth is nearly exponential at the start and flattens out as the technology reaches its limits. As one paradigm runs out of steam, the next takes over. That is what has been happening, and it is why there will be new computing paradigms after Moore's Law.
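
One common way to picture this pattern is a logistic curve, which grows almost exponentially at first and then saturates. The sketch below is purely illustrative, with arbitrary parameters standing in for any particular technology:

```python
import math

# Illustrative logistic S-curve: near-exponential growth at first, flattening
# as the technology approaches its ceiling. All parameters are arbitrary,
# chosen only to show the shape.

def s_curve(t: float, ceiling: float = 100.0, rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Logistic function: exponential early on, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

if __name__ == "__main__":
    for t in range(-6, 7, 2):
        print(f"t = {t:+d}  capability = {s_curve(t):7.3f}")
```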

Already, there are significant advances on the horizon, such as the GPU, which uses parallel computing to create massive increases in performance, not only for graphics but also for neural networks, which are modeled on the architecture of the human brain. There are 3D chips in development that pack circuits in layers. IBM and the Defense Advanced Research Projects Agency are developing cognitive-computing chips. New materials, such as gallium arsenide, carbon nanotubes, and graphene, are showing huge promise as replacements for silicon. And then there is the most interesting – and scary – technology of all: quantum computing.
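
To make the GPU point concrete: the bulk of the work in a neural network is large matrix multiplication, which parallelizes almost perfectly. The sketch below is a CPU-side NumPy stand-in with arbitrary layer sizes, not actual GPU code, but it shows the kind of operation involved:

```python
import numpy as np

# Illustrative only: the heaviest work in a neural-network layer is a large
# matrix multiplication, the same data-parallel arithmetic a GPU speeds up by
# running thousands of multiply-adds at once. This stand-in runs on the CPU
# via NumPy, and the layer sizes are arbitrary.

batch, n_inputs, n_outputs = 64, 1024, 512
x = np.random.randn(batch, n_inputs)      # a batch of input vectors
w = np.random.randn(n_inputs, n_outputs)  # layer weights
activations = np.maximum(x @ w, 0.0)      # matrix multiply followed by ReLU

print(activations.shape)  # (64, 512)
```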

Instead of encoding information as either a zero or a one, as today's computers do, quantum computers will use quantum bits, or qubits, whose states encode an entire range of possibilities by capitalizing on the quantum phenomena of superposition and entanglement. Computations that would take today's computers thousands of years will take these machines minutes.
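
One way to get a feel for the scale: a register of n qubits is described by 2^n complex amplitudes, so the state space doubles with every qubit added. The sketch below is a purely classical simulation, assuming a uniform superposition, that simply counts those amplitudes:

```python
import numpy as np

# Classical illustration of why qubit counts matter: an n-qubit register is
# described by 2**n complex amplitudes. Here we build the uniform
# superposition over every basis state and watch the state vector grow.

def equal_superposition(n_qubits: int) -> np.ndarray:
    dim = 2 ** n_qubits
    return np.full(dim, dim ** -0.5, dtype=complex)  # squared amplitudes sum to 1

if __name__ == "__main__":
    state = equal_superposition(3)
    print(len(state), "amplitudes for 3 qubits:", state.round(3))
    for n in (10, 20, 30, 40):
        print(f"{n} qubits -> {2 ** n:,} amplitudes to track classically")
```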

Add artificial intelligence to the advances in hardware, and you begin to realize why luminaries such as Elon Musk, Stephen Hawking, and Bill Gates are worried about the creation of a “super intelligence.”  Musk fears that “we are summoning the demon.”  Hawking says it “could spell the end of the human race.”  And Gates wrote: “I don’t understand why some people are not concerned.”

Kurzweil tells me he is not worried. He believes we will create a benevolent intelligence and use it to enhance ourselves. He sees technology as a double-edged sword, just like fire, which has kept us warm but has also burned down our villages. He believes that technology will enable us to address the problems that have long plagued human civilization – disease, hunger, and shortages of energy, education, and clean water – and that we can use it for good.

These advances in technology are a near certainty.  The question is whether humanity will rise to the occasion and use them in a beneficial way.  We can either build a Star Trek future, in which our civilization rises to new heights, or descend into a Mad Max world. It is up to us.

By Vivek Wadhwa – SingularityHub
