Evolution of CPU Processing Power | 2 - Rise of the x86

In this multi-part series, we explore the evolution of the microprocessor and its astonishing growth in processing power over the decades. In Part 2, we learn about how the x86 architecture came to dominate the PC world through the trifecta of Intel, IBM, and Microsoft.

As the 1970s progressed, CPU designs grew more robust, leveraging faster clock speeds, larger address capacities, and more elaborate instruction sets. The next major offering from Intel was the 8008.


One of the more prominent additions to the 8008's feature list was indirect addressing. With direct addressing, an instruction is given a memory address and fetches the data stored at that address. With indirect addressing, the contents of the referenced memory location are instead a pointer to another location, where the data actually resides.


The 8008 also implemented a mechanism known as interrupts. Interrupts allow hardware signals and internal CPU events to pause program execution and jump to a small, high-priority region of code. Examples of interrupt events include a real-time clock signal, a trigger from a piece of external hardware such as a keyboard, or a change in the CPU's internal state. Even program code can trigger an interrupt. After the interrupt service code finishes executing, the original program resumes.


The next major Intel product was the 8080. The 8080 was the first in Intel's product line to utilize an external bus controller. This support chip was responsible for interfacing with RAM and other system hardware components, communications commonly referred to as input/output, or IO. This allowed the CPU to interface with slower memory and IO devices that operated on system clock speeds below the CPU's own clock speed. It also enhanced overall electrical noise immunity.


The 8080 was considered by many to be the first truly usable microprocessor, but competing processor architectures were emerging. Over the next few years, the rise of desktop computing came to be dominated by the competing Zilog Z80 CPU, which ironically was an enhanced extension of Intel's own 8080, designed by former Intel engineer Federico Faggin. Intel's counter to this was the release of the 8086.


Keeping in line with the software-centric ethos, CPU support for higher-level programming languages was enhanced by the addition of more robust stack instructions. In software design, commonly used pieces of code are structured into blocks called subroutines, sometimes also referred to as functions, procedures, or subprograms.


To illustrate this, let's say we made a program that finds the average of thousands of pairs of numbers. To do this efficiently, we write a block of code that takes in two numbers, calculates their average, and returns it. Our program then goes through the list of number pairs, calling the subroutine to perform each calculation and return the result to the main program sequence. The stack is used to store and transport this data, along with the return addresses for subroutine calls.


The notable complexity of the 8086 and its success cemented Intel's commitment to a key characteristic of its architecture: CISC, or complex instruction set computer. Though a CISC approach was used in the 8080 and its mildly enhanced successor, the 8085, the 8086 marked Intel's transition into full-blown adoption of CISC architecture with its robust instruction set.


With only a handful of CPUs employing it, CISC architecture is a relatively rare design choice compared to the dominant RISC, or reduced instruction set computer, architecture. Even today, x86 CPUs remain the only mainline processors that use a CISC instruction set.


The difference between a RISC CPU and a CISC CPU lies in their respective instruction sets and how they are executed. RISC utilizes simple, primitive instructions, while CISC employs robust, complex instructions.


Aside from adopting CISC architecture, the 8086 also combated the performance penalty of accessing memory in new ways, most notably by prefetching upcoming instruction bytes into a small internal queue while the current instruction was still executing.

The 8086's performance was further enhanced by its ability to make use of the 8087, a separate floating point math co-processor.


The success of the 8086 processors is synergistically linked to another runaway success in computing history. In the late 1970s, the new personal computer industry was dominated by the likes of Commodore, Atari, Apple, and the Tandy Corporation. With projected annual growth of over 40% in the early 1980s, the personal computer market gained the attention of mainframe giant IBM, leading to the launch of the IBM PC. That launch paved the way for Microsoft's dominance in the software industry, established the IBM PC as the dominant personal computer, and made x86 the primary architecture of PCs to this day.