Computers are becoming faster and faster, but their speed is still limited by the physical restrictions of an electron moving through matter. What technologies are emerging to break through this speed barrier?

David DiVincenzo at the IBM Thomas J. Watson Research Center offers this response:

“All current computer device technologies are indeed limited by the speed of electron motion. This limitation is rather fundamental, because the fastest possible speed for information transmission is of course the speed of light, and the speed of an electron is already a substantial fraction of this. Where we hope for future improvements is not so much in the speed of computer devices as in the speed of computation. At first, these may sound like the same thing, until you realize that the number of computer device operations needed to perform a computation is determined by something else–namely, an algorithm.

“A very efficient algorithm can perform a computation much more quickly than can an inefficient algorithm, even if there is no change in the computer hardware. So further improvement in algorithms offers a possible route to continuing to make computers faster; better exploitation of parallel operations, pre-computation of parts of a problem, and other similar tricks are all possible ways of increasing computing efficiency.
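
To make the point concrete, here is a minimal sketch in Python (the example is illustrative, not drawn from DiVincenzo's answer): both functions compute the same sum on the same hardware, but one needs a number of operations proportional to n while the other needs only a handful.

```python
def sum_naive(n):
    """O(n): one addition per term, i.e. n device operations."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_closed_form(n):
    """O(1): Gauss's formula n(n+1)/2 replaces the entire loop."""
    return n * (n + 1) // 2


n = 1_000_000
# Identical answers, vastly different amounts of work:
assert sum_naive(n) == sum_closed_form(n)
```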

“These ideas may sound like they have nothing to do with ‘physical restrictions,’ but in fact we have found that by taking into account some of the quantum-mechanical properties of future computer devices, we can devise new kinds of algorithms that are much, much more efficient for certain computations. We still know very little about the ultimate limitations of these ‘quantum algorithms.’”
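
Two standard examples of such quantum speedups, named here only for concreteness (the answer above does not specify them), are Grover's search algorithm and Shor's factoring algorithm:

```latex
% Searching an unstructured list of N items:
%   classical brute force  -->  Grover's algorithm
\[
  O(N) \;\longrightarrow\; O(\sqrt{N})
\]
% Factoring an n-bit integer:
%   best known classical method (number field sieve)  -->  Shor's algorithm
\[
  \exp\!\bigl(O(n^{1/3} (\log n)^{2/3})\bigr) \;\longrightarrow\; O(n^{3})
\]
```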

Seth Lloyd, an assistant professor in the mechanical engineering department at the Massachusetts Institute of Technology, prepared this overview:

“The speed of computers is limited by how fast they can move information from where it is now to where it has to go next and by how fast that information can be processed once it gets there. An electronic computer computes by moving electrons around, so the physical restrictions of an electron moving through matter determine how fast such computers can run. It is important to realize, however, that information can move about a computer much faster than the electrons themselves. Consider a garden hose: When you turn on the faucet, how long does it take for water to come out the other end? If the hose is empty, then the amount of time is equal to the length of the hose divided by the velocity at which water flows down the hose. If the hose is full, then the amount of time it takes for water to emerge is the length of the hose divided by the velocity at which an impulse propagates down the hose, a velocity approximately equal to the speed of sound in water.
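
Putting rough numbers on the hose analogy (the figures below are assumptions chosen for illustration, not values from Lloyd's answer):

```python
hose_length = 10.0          # metres (assumed)
water_flow_speed = 2.0      # m/s, typical garden-hose flow (assumed)
sound_speed_water = 1480.0  # m/s, speed of sound in water

# Empty hose: the water itself must traverse the full length.
t_empty = hose_length / water_flow_speed
# Full hose: only the pressure impulse must traverse it.
t_full = hose_length / sound_speed_water

print(f"empty hose: {t_empty:.1f} s")        # ~5 s
print(f"full hose:  {t_full * 1000:.1f} ms") # ~7 ms
```

The full hose delivers water roughly 700 times sooner, which is exactly the sense in which a wire packed with electrons carries a signal far faster than any individual electron moves.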

“The wires in an electronic computer are like full hoses: they are already packed with electrons. Signals pass down the wires at the speed of light in metal, approximately half the speed of light in vacuum. The transistorized switches that perform the information processing in a conventional computer are like empty hoses: when they switch, electrons have to move from one side of the transistor to the other. The ‘clock rate’ of a computer is then limited by the maximum length that signals have to travel divided by the speed of light in the wires and by the size of transistors divided by the speed of electrons in silicon. In current computers, these numbers are on the order of trillionths of a second, considerably shorter than the actual clock times of billionths of a second. The computer can be made faster by the simple expedient of decreasing its size. Better techniques for miniaturization have been for many years, and still are, the most important approach to speeding up computers.
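
The two delays described here can be estimated directly; the lengths and speeds below are assumptions for illustration rather than figures from the text:

```python
c = 3.0e8                 # speed of light in vacuum, m/s
v_wire = 0.5 * c          # signal speed in wires: about half c, per the text
v_electron = 1.0e5        # electron saturation velocity in silicon, m/s (approx.)

wire_length = 3.0e-3      # a few millimetres across a chip (assumed)
transistor_size = 0.5e-6  # a half-micron transistor (mid-1990s scale, assumed)

t_wire = wire_length / v_wire                # 'full hose' delay
t_transistor = transistor_size / v_electron  # 'empty hose' delay

print(f"wire delay:       {t_wire:.1e} s")        # ~2e-11 s
print(f"transistor delay: {t_transistor:.1e} s")  # ~5e-12 s
```

Both come out at picoseconds, consistent with the claim that the physical limits sit well below the nanosecond clock periods of current machines.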

“In practice, electronic effects other than the speed of light and the speed of electrons are at least as important in limiting the speed of conventional computers. Wires and transistors both possess capacitance, or C (which measures their capacity to store electrons), and resistance, or R (which measures the extent to which they resist the flow of current). The product of resistance and capacitance, RC, gives the characteristic time scale over which charge flows on and off a device. As the components of a computer get smaller, R goes up and C goes down, so making sure that every piece of a computer has the time to do what it needs to do is a tricky balancing act. Technologies for performing this balancing act without crashing are the focus of much present research.
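
The RC argument is easy to state as a formula; the component values below are assumed, order-of-magnitude figures:

```python
def rc_delay(resistance_ohm, capacitance_farad):
    """Characteristic time for charge to flow on or off a device."""
    return resistance_ohm * capacitance_farad

# A thin on-chip wire of ~1 kilo-ohm driving a small ~1 fF gate (assumed):
print(f"{rc_delay(1.0e3, 1.0e-15):.0e} s")  # 1e-12 s: one picosecond
```

Shrinking a device lowers its C, but shrinking a wire's cross-section raises its R, so the product RC does not automatically improve with miniaturization; that is the balancing act.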

“As noted above, one of the limits on how fast computers can function is given by Einstein’s principle that signals cannot propagate faster than the speed of light. So to make computers faster, their components must become smaller. At current rates of miniaturization, the behavior of computer components will hit the atomic scale in a few decades. At the atomic scale, the speed at which information can be processed is limited by Heisenberg’s uncertainty principle. Recently researchers working on ‘quantum computers’ have constructed simple logical devices that store and process information on individual photons and atoms. Atoms can be ‘switched’ from one electronic state to another in about 10^-15 second (one femtosecond). Whether such devices can be strung together to make computers remains to be seen, however.
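
One textbook way to arrive at a switching time of that order (an estimate supplied here; the answer does not spell it out) is to apply the energy-time uncertainty relation to an electronic transition of roughly one electron volt:

```latex
\[
  \Delta t \;\gtrsim\; \frac{\hbar}{\Delta E}
  \;\approx\;
  \frac{1.05 \times 10^{-34}\,\mathrm{J\,s}}{1.6 \times 10^{-19}\,\mathrm{J}}
  \;\approx\; 10^{-15}\,\mathrm{s}
\]
```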

“How fast can such computers eventually go? IBM Fellow Rolf Landauer notes that extrapolating current technology to its ‘ultimate’ limits is a dangerous game: many proposed ‘ultimate’ limits have already been passed. The best strategy for finding the ultimate limits on computer speed is to wait and see what happens.”

Robert A. Summers is a professor of electronic engineering technology at Weber State University in Ogden, Utah. His answer focuses more closely on the current state of computer technology:

“Physical barriers tend to place a limit on how much faster computer-processing engines can process data using conventional technology. But manufacturers of integrated-circuit chips are exploring some new, more innovative methods that hold a great deal of promise.

“One approach takes advantage of the steadily shrinking trace size on microchips (that is, the size of the elements that can be ‘drawn’ onto each chip). Smaller traces mean that as many as 300 million transistors can now be fabricated on a single silicon chip. Increasing transistor densities allow for more and more functions to be integrated onto a single chip. A one-foot length of wire produces approximately one nanosecond (billionth of a second) of time delay. If the data need to travel only several millimeters from one function on a chip to another on the same chip, the data delay times can be reduced to picoseconds (trillionths of a second). Higher-density chips also allow data to be processed 64 bits at a time, as opposed to the eight-, 16- or, at best, 32-bit processors that are now available in Pentium-type personal computers.
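
The rule of thumb in this paragraph follows directly from the speed of light; the short check below uses c itself, so real on-chip signals would be somewhat slower:

```python
c = 3.0e8      # speed of light in vacuum, m/s
foot = 0.3048  # metres

print(f"1 foot: {foot / c:.1e} s")    # ~1e-09 s: one nanosecond
print(f"3 mm:   {3.0e-3 / c:.1e} s")  # ~1e-11 s: tens of picoseconds
```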

“Other manufacturers are integrating several redundant, vital processor circuits in parallel on the same chip. This procedure allows several phases of data processing to happen at once, again increasing the rate of data throughput. In another, very different approach, manufacturers are working on integrating the entire computer–including all memory, peripheral controls, clocks and controllers–on the same centimeter-square piece of silicon. This new ‘superchip’ would be a complete computer, lacking only the human interface. Palm-size computers that are more powerful than our best desktop machines will become commonplace; we can also expect that prices will continue to drop.
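
The payoff of such parallelism can be sketched in software (a loose analogy with assumed details, not a description of any particular chip): divide the data among several workers and let the phases of processing proceed at once.

```python
from concurrent.futures import ProcessPoolExecutor


def process_chunk(chunk):
    """Stand-in for one parallel processing unit's share of the work."""
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))
    # Four interleaved chunks, one per parallel 'unit' (count assumed):
    chunks = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_sums = pool.map(process_chunk, chunks)
    print(sum(partial_sums))
```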

“Another approach under consideration is software that better utilizes the capabilities of present machines. A surprising statistic is that some 90 percent of the time, the newest desktop computers run in virtual 86 mode–that is, they are made to run as if they were ancient 8086 machines–despite all their fancy high-speed, 32-bit buses and super color graphics capability. This limitation occurs because most of the commercial software is still written for the 8086 architecture. Windows NT, Windows 95 and the like are among the few attempts at utilizing PCs as 32-bit, high-performance machines.

“As for other technologies, most companies closely guard their research, and so it is difficult to know what new things are really being explored. Fiber optics and other light-based systems would make computers more immune to noise, but light travels at essentially the same speed as electromagnetic pulses on a wire. There might be some benefit from capitalizing on phase velocities to increase the speed of data transfer and processing. The phase velocity of a wave can be much greater than that of its host carrier. Utilizing this phenomenon would open an entirely new technology that would employ very different devices and ways of transporting and processing data.”
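
For reference, the quantities behind the phase-velocity remark are standard (these definitions are supplied here, not taken from Summers's answer): for a wave of angular frequency ω and wavenumber k,

```latex
\[
  v_{\mathrm{phase}} = \frac{\omega}{k},
  \qquad
  v_{\mathrm{group}} = \frac{d\omega}{dk}.
\]
% In a hollow metal waveguide, for example, the two satisfy
\[
  v_{\mathrm{phase}} \, v_{\mathrm{group}} = c^{2},
\]
% so the phase velocity exceeds c even though signals, which travel
% at the group velocity, do not.
```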

More information on the possible benefits of optical computing comes from John F. Walkup, the director of the Optical Systems Laboratory in the department of electrical engineering at Texas Tech University in Lubbock, Tex.:

“Electronic computers are limited not only by the speed of electrons in matter but also by the increasing density of interconnections necessary to link the electronic gates on microchips. For more than 40 years, electrical engineers and physicists have been working on the technologies of analog and digital optical computing, in which the information is primarily carried by photons rather than by electrons. Optical computing could, in principle, result in much higher computer speeds. Much progress has been achieved, and optical signal processors have been successfully used for applications such as synthetic aperture radars, optical pattern recognition, optical image processing, fingerprint enhancement and optical spectrum analyzers.

“The early work in optical signal processing and computing was basically analog in nature. In the past two decades, however, a lot of effort has been expended on the development of digital optical processors. The major breakthroughs have been centered around the development of devices such as VCSELs (vertical-cavity surface-emitting lasers) for data input, SLMs (spatial light modulators, such as liquid-crystal and acousto-optic devices) for putting information on the light beams, and high-speed APDs (avalanche photodiodes), or so-called smart-pixel devices, for data output. Much work remains before digital optical computers will be widely available commercially, but the pace of research and development has increased in the 1990s.

“One of the problems optical computers have faced is a lack of accuracy; for instance, these devices have practical limits of eight to 11 bits of accuracy in basic operations. Recent research has shown ways around this difficulty. Digital partitioning algorithms, which can break matrix-vector products into lower-accuracy subproducts, working in tandem with error-correction codes, can substantially improve the accuracy of optical computing operations.
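
A toy version of the partitioning idea can be written in a few lines (an illustrative scheme with assumed parameters, not the published algorithm): split each integer entry into low- and high-order digits in a base B, form matrix-vector subproducts that each need only a few bits of accuracy, and recombine them exactly.

```python
import numpy as np

B = 16  # digit base; each partitioned part then fits in 4 bits (assumed)


def partition(x, base=B):
    """Split an integer array so that x == low + base * high."""
    return x % base, x // base


A = np.random.randint(0, 256, size=(4, 4))
v = np.random.randint(0, 256, size=4)

A_lo, A_hi = partition(A)
v_lo, v_hi = partition(v)

# Four low-accuracy subproducts recombine into the exact full product:
y = (A_lo @ v_lo
     + B * (A_lo @ v_hi + A_hi @ v_lo)
     + B * B * (A_hi @ v_hi))

assert np.array_equal(y, A @ v)
```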

“Optical data storage devices will also be important in the development of optical computers. Technologies currently under investigation include advanced optical CD-ROMs as well as Write/Read/Erase optical memory technologies. Holographic data storage also offers a lot of promise for high-density optical data storage in future optical computers or for other applications, such as archival data storage.

“Many problems in developing appropriate materials and devices must be overcome before digital optical computers will be in widespread commercial use. In the near term, at least, optical computers will most likely be hybrid optical/electronic systems that use electronic circuits to preprocess input data for computation and to postprocess output data for error correction before outputting the results. The promise of all-optical computing remains highly attractive, however, and the goal of developing optical computers continues to be a worthy one.

Source: Scientific American