Evolution of CPU Processing Power | 3 - The Origin of Modern Operating Systems

During the 1960s and into the 1970s, the multitasking paradigm was gaining traction in the mainframe world.

Initially, the concept was implemented in a cruder form known as multiprogramming. Multiprogramming was accomplished by processing programs in batches, jumping between them during regions of code that wait for hardware input. This would eventually evolve into time-sharing.


By the late 1960s, true multitasking started to emerge in operating systems such as the monitor for DEC’s PDP-6, IBM’s OS/360 (MFT), and MULTICS. MULTICS would heavily influence the development of UNIX.


In a traditional single-process environment, the program being executed generally has full control of the CPU and its resources. As software grows more complex, this creates issues with efficient CPU utilization, stability, and security.


In multitasking, CPU focus is rapidly switched between concurrently running processes.


Cooperative multitasking was used by many early multitasking operating systems. Whenever a process is given CPU focus, the operating system relies on the process itself to yield control back.
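The idea can be sketched with Python generators (a simulation for illustration, not how a real kernel is written): each task runs until it voluntarily yields, and the scheduler has no way to interrupt a task that refuses to do so.

```python
def cooperative_scheduler(tasks):
    """Run tasks round-robin; each task keeps the CPU until it yields."""
    ready = list(tasks)
    trace = []
    while ready:
        task = ready.pop(0)
        try:
            trace.append(next(task))   # task runs until it reaches a yield
            ready.append(task)         # a well-behaved task rejoins the queue
        except StopIteration:
            pass                       # task finished
    return trace

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # voluntarily give control back

print(cooperative_scheduler([task("A", 2), task("B", 2)]))
```

The tasks interleave only because each one yields after every step; a task stuck in a loop that never yields would hang the whole system, which is exactly the stability problem described next.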


Preemptive multitasking solved the stability problems of cooperative multitasking by guaranteeing each process a regular period, or “time slice,” of CPU focus, regardless of whether the process yields.
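A minimal simulation of the time-slice idea (process names and work units here are invented; in real hardware the cutoff comes from a timer interrupt):

```python
from collections import deque

def preemptive_scheduler(tasks, time_slice):
    """Round-robin: each process runs at most `time_slice` units, then is preempted."""
    queue = deque(tasks.items())           # (name, remaining work units)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(time_slice, remaining)
        trace.append((name, ran))          # the "timer interrupt" fires after `ran` units
        remaining -= ran
        if remaining:
            queue.append((name, remaining))  # preempted: back of the ready queue
    return trace

# A greedy process cannot monopolize the CPU: it is cut off every 2 units.
print(preemptive_scheduler({"greedy": 5, "small": 2}, time_slice=2))
```

Unlike the cooperative case, the scheduler enforces the cutoff itself, so a misbehaving process can only waste its own slice.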


We also need a way to prevent a process from using memory allocated to another process while still allowing processes to communicate with each other safely. The solution is a dedicated layer of hardware between the CPU and RAM called the memory management unit, or MMU.


If a process attempts to access memory outside of these protection rules, a hardware fault is triggered.


On some MMUs, the concept of memory access privileging is incorporated into memory management. By assigning levels of privilege to regions of memory, it becomes impossible for a process to access code or data above its own privilege level. This creates a trust mechanism in which less trusted, lower-privilege code cannot tamper with more trusted, critical code or memory.
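The bounds and privilege checks can be illustrated with a small simulation (the two-region layout and numeric privilege levels are invented for illustration; following the x86 convention, a lower number means more trusted):

```python
class MemoryFault(Exception):
    """Models the hardware fault the MMU raises on a bad access."""

class MMU:
    def __init__(self):
        self.regions = []  # (base, limit, minimum privilege level)

    def map_region(self, base, limit, min_privilege):
        self.regions.append((base, limit, min_privilege))

    def check(self, address, privilege):
        for base, limit, min_priv in self.regions:
            if base <= address < base + limit:
                if privilege > min_priv:   # e.g. user code (3) touching kernel memory (0)
                    raise MemoryFault(f"privilege violation at {address:#x}")
                return True
        raise MemoryFault(f"unmapped address {address:#x}")

mmu = MMU()
mmu.map_region(0x0000, 0x1000, min_privilege=0)  # trusted "kernel" region
mmu.map_region(0x1000, 0x1000, min_privilege=3)  # ordinary "user" region

mmu.check(0x1800, privilege=3)        # allowed: user access to user memory
try:
    mmu.check(0x0800, privilege=3)    # user code touching kernel memory
except MemoryFault as fault:
    print("fault:", fault)
```

The fault hands control to the operating system, which can terminate or signal the offending process.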


Virtual memory is a memory management technique that provides an abstraction layer over the storage resources available on a system. While virtual memory comes in various implementations, they all fundamentally function by mapping memory accesses from logical addresses to physical ones.
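One common implementation is paging, which the mapping idea can be sketched with (a simplified model; the page size and table contents are arbitrary, and note the 80286 discussed below used segments rather than pages):

```python
PAGE_SIZE = 4096  # a common page size; chosen arbitrarily for this sketch

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical one via a simple page table."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    if page not in page_table:
        raise KeyError(f"page fault: virtual page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

# virtual page -> physical frame; contiguous virtual pages may live anywhere physically
page_table = {0: 7, 1: 2}
print(hex(translate(0x0010, page_table)))  # virtual page 0 maps to frame 7
print(hex(translate(0x1010, page_table)))  # virtual page 1 maps to frame 2
```

The process sees one contiguous logical address space even though the backing physical frames are scattered, and an access to an unmapped page faults into the operating system.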


In January of 1983, Apple released the Lisa. It would soon be overshadowed by the release of the Apple Macintosh one year later. The Macintosh product line would eventually grow dramatically over the years. The Macintosh ran on the Motorola 68K CPU.


What made the 68K so powerful was its early adoption of a 32-bit internal architecture. It was not considered a true 32-bit processor, however, but a hybrid 32/16-bit design: its registers and instruction set were 32 bits wide, while its ALU and external data bus were 16 bits wide. Despite these limitations, it proved to be a very capable processor.


The 68K did, however, support a simple form of privileging that made hardware-facilitated multitasking possible. It always operates in one of two privilege states: the user state or the supervisor state.


By the end of 1984, IBM took its next step forward with the release of its second-generation personal computer, the IBM PC AT.


Among the new software developed for the AT was a project by Microsoft called Windows. With initial development beginning in 1981, Windows made its first public debut on November 10, 1983, though Windows 1.0 would not actually ship until November 1985.


The 80286 was groundbreaking for its time: it was the first mass-produced processor that directly supported multiuser systems with multitasking. It also brought three major architectural advancements over the 8086.


The first was the elimination of multiplexing on both the data and address buses, so addresses and data no longer had to share the same pins.


The second was the relocation of memory-addressing control into a dedicated block of hardware.


The third was an improved prefetch pipeline. Known as the instruction unit, it could hold up to three decoded instructions taken from the 80286’s 6-byte prefetch queue.


The 80286 used 24-bit memory addresses, allowing it to address 16MB of RAM and making the 8086 memory model insufficient.


To make use of the full 16MB, as well as to facilitate multitasking, the 80286 could also operate in a state known as protected mode.


Segment descriptors provide a security framework by allowing write protection for data segments and read protection for code segments. If segment rules are violated, an exception occurs, triggering an interrupt that transfers control to operating system code.


The 80286’s MMU tracked all segments in two tables: the global descriptor table (GDT) and the local descriptor table (LDT), which combined could potentially address up to 1GB of virtual memory.
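The 1GB figure follows from the descriptor format: a segment selector has a 13-bit index, so each of the two tables can hold 8,192 descriptors, and an 80286 segment is limited to 64KB.

```python
descriptors_per_table = 2 ** 13   # a selector's index field is 13 bits wide
tables = 2                        # the GDT plus a process's LDT
max_segment_size = 2 ** 16        # 16-bit segment limit -> 64KB per segment

virtual_space = tables * descriptors_per_table * max_segment_size
print(virtual_space == 2 ** 30)   # 2 * 8192 * 64KB = 1GB of virtual address space
```

Note this is virtual address space per process; the physical address space remains capped at 16MB by the 24-bit address bus.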


The interrupt structure of protected mode is very different from that of real mode in that it has a table of its own, known as the interrupt descriptor table, or IDT.