In 1965, Gordon Moore observed that the number of transistors on a chip was doubling roughly every 18 months to two years. This forecasted that computing hardware would become commoditized, getting cheaper, faster, and smaller at an exponential rate. Moore’s Law has largely held up for the past 50 years, becoming not only predictive but prescriptive of the exponentially accelerating rate of technological progress.
Moore’s Law was never a law of nature but an economic principle. When Moore’s Law became widely accepted in 1991, the semiconductor industry doubled down on supply chains, R&D, and industry roadmaps in an effort to keep pace with Gordon Moore’s observation. This is industrial capitalism, not a law of nature like Dennard Scaling: the observation that as transistors become smaller, they use less power, so power density stays roughly constant. However, Dennard Scaling broke down around 2006, and CPU clock speeds have not meaningfully improved since then.
Through persistent micro-architecture optimization by the semiconductor industry, Moore’s Law has continued via improvements in parallel systems and multi-core processors. Many maintain that Moore’s Law will continue for the next few decades through specialized processors. But the brute-force acceleration required to keep pace with Moore’s Law, accompanied by monumental R&D expenditures, is producing diminishing returns.
Moore’s Law will also become impossible to sustain as it approaches its physical limits. Even parallelism has a ceiling: Amdahl’s Law bounds the maximum theoretical speedup achievable with multiprocessors and parallelized computing, because the serial fraction of a program cannot be parallelized away. And Intel has predicted that silicon transistors can only keep shrinking for another five years.
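Amdahl’s Law is easy to state concretely. Assuming a fraction p of a program can be parallelized across s processors, the overall speedup is 1 / ((1 − p) + p/s) — a minimal sketch:

```python
def amdahl_speedup(p, s):
    """Maximum theoretical speedup (Amdahl's Law) when a fraction p
    of the work runs on s parallel processors; the remaining (1 - p)
    stays serial and caps the overall gain."""
    return 1.0 / ((1.0 - p) + p / s)

# Even with 95% of the work parallelizable, 1024 cores yield under 20x:
print(amdahl_speedup(0.95, 1024))   # ~19.6x
# As s grows without bound, speedup approaches the 1/(1-p) = 20x ceiling:
print(amdahl_speedup(0.95, 10**9))
```

The serial fraction dominates quickly, which is why throwing more cores at the problem produces the diminishing returns described above.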
While Moore’s Law has been the bedrock of technological innovation, it has also bred complacency about revolutionary advances in computing. Moore’s Law entrenched a dominant model of computing based on the general-purpose architecture first proposed by John von Neumann in 1945. The economics of mass production rewarded the semiconductor industry for homogeneity in computing design.
However, new computing architectures have arisen in response to new computing needs. While traditional software runs on general-purpose microprocessors, the rise of machine learning demands specialized computing. This has been a forcing function for a more heterogeneous computing environment. Getting rid of several design features dating from the 1940s could unlock huge efficiency gains.
Google built the Tensor Processing Unit (TPU), tailored to the demanding parallelized computations required by deep learning. The most exciting part of Google’s TPU is the agile development approach: the processor was built in 15 months with 30 engineers, and Google has released new versions of the TPU since 2016. Its simple architectural design borrows ideas from dataflow computing and systolic arrays explored in the 1980s, which received less attention because of Moore’s Law.
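The systolic-array idea behind the TPU can be sketched in a few lines. In a weight-stationary design, the weights sit fixed in a grid of processing elements while activations stream through, each element performing one multiply-accumulate per step. This toy NumPy model captures that MAC pattern (not the actual timing or dataflow of Google’s hardware):

```python
import numpy as np

def systolic_matmul(A, W):
    """Toy model of a weight-stationary systolic array: weights W stay
    fixed in the PE grid; each processing element performs one
    multiply-accumulate per time step as activations A stream through."""
    n, k = A.shape
    k2, m = W.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    # Each time step t, every PE (i, j) adds one partial product,
    # mimicking partial sums flowing down the columns of the array.
    for t in range(k):
        for i in range(n):
            for j in range(m):
                C[i, j] += A[i, t] * W[t, j]
    return C

A = np.array([[1., 2.], [3., 4.]])
W = np.array([[5., 6.], [7., 8.]])
print(systolic_matmul(A, W))  # same result as A @ W
```

The point is that matrix multiplication decomposes into a grid of identical, simple MAC units with purely local communication — exactly the kind of structure that is cheap to lay out in silicon.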
Quantum computing is now being driven by the same forces, a response to the diminishing returns and economic pressures of continued investment in Moore’s Law. The promise of quantum computing is to harness quantum mechanics to solve computational problems too complex for current hardware.
Quantum computing replaces bits (binary digits) with quantum bits, or qubits. Thanks to superposition, a qubit can exist in a combination of 0 and 1 at the same time, and a system of n qubits spans a 2^n-dimensional state space. These systems promise speedups for factorization, simulation of complex quantum systems, discrete optimization, and more. Classical computers struggle with large-scale factoring (the foundation of most encryption standards like RSA), but a quantum computer running Peter Shor’s algorithm would not.
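A minimal NumPy sketch makes the superposition claim concrete: a single qubit is a 2-component state vector, a Hadamard gate puts it into an equal mix of 0 and 1, and simulating n qubits classically requires 2^n amplitudes — which is why classical simulation blows up:

```python
import numpy as np

# A single qubit as a 2-component state vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0
probs = np.abs(psi) ** 2   # measurement probabilities: [0.5, 0.5]
print(probs)

# Simulating n qubits classically needs a 2**n-dimensional state vector.
n = 30
print(2 ** n)  # over a billion amplitudes for just 30 qubits
```

This exponential state space is exactly what makes quantum hardware attractive for simulating 2^n-dimensional systems, and what makes those systems intractable on classical machines.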
I’m often skeptical about making predictions, but the impact of quantum computing is likely to be transformational, with clear early applications in artificial intelligence, modeling molecular interactions, and cybersecurity.
The future of computing after the end of Moore’s Law is exciting. Growing diversity in the computing environment has already changed the design space for innovation. And software itself is changing: traditional hand-written code will increasingly be supplanted by software composed of machine learning models.
Gradient descent can write code better than you. I’m sorry.
— Andrej Karpathy (@karpathy) August 4, 2017
To keep pace with the exponential trajectory of technological progress that the world has grown accustomed to, computing needs to undergo radical, not incremental, innovation. I remain interested only in zero-to-one opportunities, and I’m skeptical of some areas of computing innovation, like quantum annealers. However, I’m learning about these new technologies and excited to back the venture-specific opportunities that come my way.