The Future of Computing


Evolutionary Changes

Internally, computers are undergoing cyclical evolutionary changes as well. CPUs gradually evolved from spending tens, if not hundreds, of cycles on an individual instruction to just one cycle per instruction (the scalar architecture). The introduction of additional execution units then allowed CPUs to process several instructions per clock cycle, exploiting instruction-level parallelism (the superscalar architecture). Later, several CPUs were crammed onto one motherboard (the multi-processor architecture). Now several cores are fused together in a single multi-core package, and several such multi-core chips can be installed on a single motherboard. So in the end a data center is filled with clusters of stacks of server blades, each blade sporting one or more multi-core superscalar CPUs. That gives us at least five different levels of integration:

  • Cluster
  • Server
  • CPU
  • Core
  • Execution unit

with four of those levels typically found on desktop PCs. But why do we need this complexity, and how did it come into being?

As computer clock speeds increased from kilohertz to gigahertz, so did our imagination and understanding of what could be done with this computational power to serve our needs: to provide entertainment at home, for example, and to boost productivity in the workplace. Originally, computers were designed to serve a clearly defined special purpose and were therefore meant to perform a specific single task. When computing power grew beyond immediate needs, multi-tasking was invented to give multiple users access to the spare computational resources. But when hardware costs fell to consumer levels, personal computers came out, and they quite naturally were designed to be single-tasking.

The first mass-produced personal computers were quite slow, and the push to increase microprocessor performance was at first well justified: we wanted good response from desktop applications, and occasionally we wanted to play arcade games. The 4.77 MHz of the PC XT was not always good enough for the purpose. But as CPU power grew to meet the specific tasks we wanted our PCs to perform, it became too much for general tasks such as text editing or spreadsheets. That extra power, just as in the case of the old mainframes, led to the adoption of multi-tasking operating systems on desktop and personal computers. We had extra power, and we wanted to do something with it.

Ironically, the mass adoption of multi-tasking operating systems on PCs (Microsoft Windows, for instance) coincided with the introduction of the graphical user interface. Thus formerly more or less satisfactory CPU performance became vastly inadequate, spurring a race to increase CPU performance to compensate for inefficient software. The transition from single-threaded, text-based DOS programs to graphical, multithreaded Windows resulted in unprecedented bloating of software code and a general system slowdown due to lacking graphics and disk I/O performance. Wasteful programming practices such as dynamically loaded libraries, dynamic memory allocation, shared components, inefficient object-oriented programming, and multi-layered libraries also contributed greatly to the slowdown. All these inefficiencies instantly justified further CPU performance increases: now we needed faster computers just to run our operating systems and the new versions of old software burdened with graphical user interfaces.

Thus, paradoxically, the desktop computers of the '80s initiated a major leap in Wirth's Law, which states that software is getting slower faster than computers are getting faster. Perhaps the first loop of the Wirth's Law spiral was objective: early CGA and EGA hardware and CPU speeds of 12-16 MHz were barely enough to run programs with complicated graphical interfaces. The further unraveling of Wirth's Law, however, was entirely subjective, in the sense that the subsequent slowdown of software resulted from our attempts to boost programmer productivity by employing various "coding techniques" that promised simplicity at the cost of efficiency.

