Moore’s Law Definition, History, & Impact for the Future


Waves of economic development are driven by inventions, which are themselves typically driven by technological breakthroughs. One of the dominating features of the economy over the last fifty years has been Moore’s law, which has led to exponential growth in computing power and an exponential drop in its cost.

The UNITE 4 Waves of Industrial Revolution
Designed by: Susanne M. Zaninelli & Stefan F. Dieffenbacher

The 4 Waves of Industrial Revolution offer a clear visual representation of the history of human civilization, with an emphasis on cultural, technological, and organizational development. One of the most remarkable examples of disruptive innovation within these waves is known today as Moore’s Law.

What is Moore’s law?

In 1965, Gordon E. Moore, the co-founder of Intel, observed that the number of transistors on an integrated circuit doubles about every two years. Intel made this observation the main goal of its development: production and research on chip architectures were tailored to double computing performance in a two-year cycle while keeping costs within a reasonable range.
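Stated as arithmetic, the law is simply exponential growth with a two-year doubling period. Here is a minimal sketch of that projection (the 1971 Intel 4004, with roughly 2,300 transistors, serves as a convenient historical baseline):

```python
# Project transistor counts under Moore's law: one doubling every two years.
# Baseline: the Intel 4004 (1971), which had roughly 2,300 transistors.
def projected_transistors(year, base_year=1971, base_count=2300):
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Fifty years of doubling turn 2,300 transistors into tens of billions, which is the right order of magnitude for today’s largest chips.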

By reducing the size of transistors (down to 65 nm) and optimizing their usage, Intel was able to follow that goal with single-core CPUs up to the year 2005, when further clock-speed increases ran into heat and power limits. As a solution, Intel introduced the first multi-core CPU: by doubling the number of processing units on a single CPU, the performance was also doubled.
While this continued Moore’s law, multi-core CPUs led to a hard shift in the way applications were developed. Instead of writing a single serial application, programmers had to introduce parallelism to be able to use multiple cores simultaneously.
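As a minimal illustration of that shift, the following sketch runs a hypothetical CPU-bound workload first serially, then distributed across cores with Python’s standard library:

```python
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # A hypothetical CPU-bound task: sum of squares up to n.
    return sum(i * i for i in range(n))

inputs = [2_000_000] * 8

if __name__ == "__main__":
    # Serial: the tasks occupy a single core, one after another.
    serial_results = [work(n) for n in inputs]

    # Parallel: the same tasks are spread across all available cores.
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(work, inputs))

    assert serial_results == parallel_results  # same answers, less wall-clock time
```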

Multi-core CPUs introduced several constraints to Moore’s law. Not every algorithm can fully exploit multiple cores (Amdahl’s law), so only well-parallelizable algorithms benefit fully from additional processing units. For single-core CPUs, the reduction of transistor sizes also reduced the energy required per transistor (Dennard scaling); for multi-core CPUs this was no longer the case. Independently, a third constraint became important, the memory-performance gap: while the computational performance of CPUs grows exponentially, their memory bandwidth grows only linearly, so problems that require a large number of memory transfers cannot use the full performance available on a CPU.
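Amdahl’s law can be stated directly: if a fraction p of a program is parallelizable, the speedup on n cores is at most 1 / ((1 - p) + p/n). A minimal sketch:

```python
def amdahl_speedup(p, n):
    # Upper bound on speedup when a fraction p of the work
    # runs perfectly in parallel on n cores (Amdahl's law).
    return 1 / ((1 - p) + p / n)

# Even with 95% of the work parallelizable, 64 cores yield less than 16x,
# and the speedup can never exceed 1 / (1 - p) = 20x, however many cores.
for cores in (2, 8, 64, 1024):
    print(f"{cores:5d} cores: {amdahl_speedup(0.95, cores):5.1f}x")
```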


Today’s reason for the continuation of Moore’s law

Graphics Processing Units (GPUs) are today’s reason for the continuation of Moore’s law. GPUs, first introduced for the fast computation of graphics in computer games, allow massively parallel processing of problems on thousands of cores (current Nvidia GPUs have up to 6,000 cores). GPU producers reach this high count by reducing the computational flexibility of the individual cores. GPUs are therefore often seen as accelerators: for complex tasks, developers still rely on CPUs; whenever an algorithm requires a high number of repetitive operations on a regular dataset, the CPU offloads the work to the GPU.
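A sketch of this accelerator pattern, assuming CuPy (a NumPy-compatible GPU array library, chosen here purely for illustration) and an Nvidia GPU with CUDA are available:

```python
import numpy as np
import cupy as cp  # NumPy-compatible GPU library; assumes an Nvidia GPU with CUDA

# A large, regular dataset with the same cheap operation per element:
# exactly the repetitive workload a GPU handles well.
data = np.random.rand(10_000_000).astype(np.float32)

gpu_data = cp.asarray(data)            # the CPU hands the dataset over to the GPU
gpu_result = cp.sqrt(gpu_data) * 2.0   # thousands of cores apply the same operation
result = cp.asnumpy(gpu_result)        # copy back so the CPU can continue with it
```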

The algorithms that benefit most from this type of hybrid hardware are the ones showing the most impressive results: neural networks in the context of machine learning, and the problems used as proof-of-work in blockchains.
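Proof-of-work shows why such hardware pays off: the task consists of nothing but the same cheap operation repeated enormously often. A simplified sketch of the idea (real blockchains use harder difficulty targets and different constructions):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    # Find a nonce so that the block's SHA-256 hash starts with
    # `difficulty` zero hex digits: pure, massively repetitive hashing.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(proof_of_work("example block", difficulty=4))  # ~65,000 hashes on average
```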

In parallel, the industry started to develop specialized chips for designated fields of application. Tensor Processing Units (TPUs), developed by Google for the sole purpose of accelerating machine learning algorithms, are one example; Nvidia ships similar Tensor Cores in its GPUs. Apple follows a hybrid approach in the chip architecture currently used in iPhones: energy-efficient cores are used whenever the smartphone is idle and only tracks background data; when it is in use, high-performance cores are activated. In recent supercomputer clusters, hundreds of thousands of high-performance, energy-hungry processors run parallel algorithms to reach performance in the exaflop range.

Its impact on the future

The developments described above are the ones we will most likely see continue in the future.

While transistor sizes are still being reduced, the cycles are getting slower (the 2 nm barrier is expected to be reached in 2025), and Moore’s law has shifted from the general development of chips to performance improvement through specialization.

As an emerging technology, quantum computers (QCs) will also be applied to a specific set of problems. While quantum computers are today mostly an abstract concept (current attempts at building an actual quantum computer focus on finding a physically stable implementation), physics and theoretical computer science already allow us a peek into the future:

In quantum physics, states become a matter of probability: a quantum bit can exist in a superposition of its two basis states at once, rather than in exactly one of two states like a classical bit, and n qubits can thereby encode 2^n amplitudes simultaneously. With suitable algorithms (Shor’s algorithm, Grover’s algorithm), the quantum state can be steered so that a measurement yields the most probable solution to a problem. Because the result of a quantum computation is only correct with some probability, it always has to be verified by a classical computer.
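The scale of the advantage is easy to quantify for unstructured search: finding one item among N takes on the order of N classical queries but only on the order of √N Grover iterations. A back-of-the-envelope sketch:

```python
import math

# Unstructured search over N items: a classical computer needs ~N/2 queries
# on average, while Grover's algorithm needs ~(pi/4) * sqrt(N) iterations.
for bits in (20, 40, 128):
    n = 2 ** bits
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"2^{bits} items: ~{classical:.2e} classical vs ~{grover:.2e} Grover")
```

For a 128-bit search space, this turns roughly 10^38 classical guesses into roughly 10^19 quantum iterations, which is why Grover’s algorithm effectively halves the security level of symmetric keys.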

With these properties, quantum computers are perfectly suited for problems where classical computers can do no better than guessing the solution. We will see quantum computers change encryption as we know it, since today an encryption key is considered safe precisely because it can only be guessed. On the other hand, for search problems as they occur in databases or in machine learning, we will see huge performance improvements.

