Conclusion
There is another compelling reason to believe that big, hot, and insanely fast CPUs will die out through natural selection. As people become more conscious of "green" concepts and of power consumption, our eyes will finally open to the extremely bloated code that our GHz-rated CPUs execute at the same rate as the MHz-rated processors in specialized devices. The proper question to ask would be "How much power does your software require?", where power means electricity, with the implication of high energy cost. Indeed, that would mean that slow and bloated software is expensive software, for it requires the CPU to run at full blast. To make this point clearer, think of a datacenter with a thousand blade servers, each sporting several CPUs and hard disks. The bloated and slow software we have today implies that the operating cost of such a datacenter is high, for it needs a thousand blade servers, a thousand terabyte disks, and gigabytes upon gigabytes of memory, with cooling and power costs of 10 million a year. Now what if we were to optimize our software to reduce its RAM, disk, and CPU requirements by an order of magnitude (easily achieved if we scrap interpreted and otherwise "managed" code, with its inefficient memory management model and multi-layered libraries, and invest in compiler and optimizer development) and thereby cut the number of servers tenfold? Or what if, instead, we replaced huge blade servers with gigahertz CPUs by compact pocket-size microblades outfitted with megahertz-rated CPUs, a few megabytes of RAM, and a microdrive?
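The back-of-envelope arithmetic behind this claim can be sketched as follows. The server count and the $10-million annual figure come from the scenario above; the tenfold reduction factor is the assumed optimization gain, and the linear scaling of cost with server count is a deliberate simplification:

```python
# Illustrative estimate of datacenter savings from a 10x reduction in
# hardware requirements. Assumes power/cooling cost scales linearly
# with the number of servers, which is a simplification.

servers_before = 1000              # blade servers (from the scenario)
annual_cost_before = 10_000_000    # power and cooling, USD/year (from the scenario)

optimization_factor = 10           # assumed order-of-magnitude software optimization

servers_after = servers_before // optimization_factor
annual_cost_after = annual_cost_before // optimization_factor
annual_savings = annual_cost_before - annual_cost_after

print(servers_after)    # 100
print(annual_savings)   # 9000000
```

The point is not the exact numbers but the shape of the trade: an order-of-magnitude software improvement translates directly into an order-of-magnitude reduction in hardware and in the recurring energy bill.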
Needless to say, there is ample room for software optimization that has been ignored for decades, since steady increases in CPU performance allowed us to neglect it. Yet the situation with energy resources is now such that slow and bloated software means higher costs, both directly in the electric power the CPU draws to run it and indirectly in the power consumed by RAM, enormous hard drives, and cumulative cooling. Furthermore, the recent tendency to aggregate multiple software components on a shared computational resource (that is, a server) under the control of a multitasking OS should be reversed in favor of completely isolated software components running on low-power dedicated hardware. Thus, if we begin optimizing our code, we are likely to see blade server racks replaced with microblade server racks, where each microblade performs a dedicated task and consumes less power, and where the total number of microblades is much greater than the number of the original "macro" blades.
Indeed, such complete isolation of software components (database instances, web applications, network services, and the like) that are currently squeezed together on the same server should greatly improve system robustness, both by enabling real-time component hot-swap or upgrade and by completely eliminating the software installation, deployment, and patch conflicts that plague large servers today.
When and if that happens depends on two factors: energy costs and code optimization efficiency. The former drives the latter. Therefore, a further increase in energy prices is likely to result in a gradual reduction of the CPU's role in computer systems, more optimized code, and a return toward the single-processor, single-task, special-purpose computing paradigm. On the other hand, this vision may never materialize if a technological breakthrough occurs on the manufacturing side that allows further CPU speed increases without increased energy dissipation (quantum computing, advances in superconductors, photonics, and so on). However, one thing is clear -- the role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly, we will be compelled to write more efficient code for reasons of power consumption and cost.