Internally, computers have undergone cyclical evolutionary changes as well: CPUs gradually evolved from spending tens if not hundreds of cycles on an individual instruction to just one cycle per instruction (the scalar architecture). Then the introduction of additional execution units allowed CPUs to process several instructions per clock cycle, thus exploiting instruction-level parallelism (the superscalar architecture). Later, several CPUs were crammed onto one motherboard (the multi-processor architecture). Now several CPU cores are fused together in a single multi-core package, and several such multi-core chips can be installed on a single motherboard. So in the end a data center is filled with clusters of stacks of server blades, each blade sporting one or more multi-core superscalar CPUs. So we have at least five different levels of integration:
- Execution unit
- CPU core
- Multi-core CPU package
- Motherboard holding one or more CPU packages
- Cluster of server blades

with the first four levels typically found on desktop PCs. But why do we need this complexity, and how did it come into being?
As computer clock speeds increased from kilohertz to gigahertz, so did our imagination and understanding of what could be done with this computational power to serve our needs: for example, to provide entertainment at home and to boost productivity, the practical reason for computers in the workplace. Originally, computers were designed to serve a clearly defined special purpose and were therefore meant to perform a specific, single task. When computing power grew beyond immediate needs, multi-tasking was invented to give multiple users access to the spare computational resources. But when hardware costs dropped to the consumer level, personal computers came out, and they quite naturally were designed to be single-tasking.
The first mass-produced personal computers were quite slow, and the drive to increase microcomputer performance was justified at first: we wanted good response from desktop applications, and occasionally we wanted to play arcade games. The 4.77 MHz of the PC XT was not always good enough for the purpose. But as CPU power grew to meet the specific tasks we wanted our PCs to perform, it became too much for general tasks such as text editing or spread-sheeting. That extra power, just as in the case of the old mainframes, led to the adoption of multi-tasking operating systems on desktop and personal computers. We had extra power, and we wanted to do something with it.
Ironically, the mass adoption of multi-tasking operating systems on PCs (Microsoft Windows, for instance) coincided with the introduction of the graphical user interface. Thus formerly more or less satisfactory CPU performance became vastly inadequate and spurred a race to increase CPU performance to compensate for inefficient software. The transition from single-threaded, text-based DOS programs to graphical, multi-threaded Windows resulted in unprecedented bloating of software code and a general system slowdown due to lacking graphics and disk I/O performance. Questionable programming paradigms such as dynamically loaded libraries, dynamic memory allocation, shared components, inefficient object-oriented programming, and multi-layered libraries also greatly contributed to the slowdown. All these inefficiencies instantly justified further CPU performance increases: now we needed faster computers just to run our operating systems and new versions of old software burdened with graphical user interfaces.
Thus, paradoxically, the desktop computers of the '80s initiated a major leap in Wirth's Law, which states that software is getting slower faster than computers are getting faster. Perhaps the first loop of the Wirth's Law spiral was objective: the initial CGA and EGA hardware and CPU speeds of 12-16 MHz were barely enough to run programs with complicated graphical interfaces. However, the further unraveling of Wirth's Law was completely subjective, in the sense that the subsequent slowdown of software resulted from our attempts to boost programmer productivity by employing various "coding techniques" that promised simplicity at the cost of efficiency.