Specialty Hardware, Databases, Mainframes, and Service-Oriented Architecture (SOA)
If I were writing a book about applying hardware to performance problems, it would include a section on specialty machines, such as servers for analytics and database machines. It would also describe augmenting a computer's capabilities with specialized processors and dedicated chips for purposes such as database processing or graphics. Given our long history of enthusiasm for turbo-charging systems, it's no surprise that today's system architects look to hardware solutions to streamline query execution, analytics, web services performance and other processing in a service-oriented architecture (SOA).

Computer hardware designers have known for decades that dedicated co-processors or chips can improve the overall performance of a computer system. They have also used microcoded instruction sets for flexibility and performance, for example by implementing a virtual machine in firmware instead of software.
A History of Reducing the CPU's Workload

Auxiliary processors gained traction before networking arrived to give us the ability to distribute work across connected machines. In the 1960s, IBM provided I/O channel controllers to lighten the workload of the central processing unit (CPU) in its System/360 mainframes. It also used microcode to implement the instruction set across the various models of the 360 family. In the 1970s, Digital Equipment Corporation augmented its legendary PDP-11 minicomputers with a floating-point processor and introduced the PDP-11/60, which offered a user-programmable writable control store for custom machine instructions. Digital also bundled a PDP-11 inside its DECsystem-10 as a communications front end for the mainframe. It was during the 1970s that system designers sought the fastest hardware they could find for the real-time challenge of ballistic missile defense. They selected the CDC 7600, designed by industry icon Seymour Cray; it supported instruction pipelining and included ten smaller processors as Peripheral Processing Units.
A decade later, Jay Miner came up with the innovative design of a low-cost multimedia computer, the Commodore Amiga. Miner's custom chips (Agnus, Denise, Paula) delivered blazing performance for tasks such as bit blitting and video and audio processing. The 1980s also brought interest in database machines: two of the earliest efforts to handle large databases came from Britton-Lee and Teradata. At the other end of the spectrum was an expansion board for PCs, although the MIPS rating of that machine would get laughs from users of today's Oracle Exadata Database Machine.
Whither the Mainframe?

In an era of distributed computing, when we've become accustomed to throwing server hardware at performance bottlenecks, it's interesting that big iron has not faded from the scene. It's been more than two decades since my first project for a client who was installing networks and moving away from mainframe-centric computing. Within a few years, there were forecasts of the imminent demise of the mainframe as a wave of organizations bought networks, servers and desktop computers.
IBM was described as a failure in the 1980s and early 1990s because of the perception that mainframes were dinosaurs. By 1993, IDC forecast a 2-4% rate of decline in the mainframe market. But decades have passed and the mainframe is not extinct. In fact, the recent third-quarter financials for IBM show sales of System z mainframes increased 15%.
So the mainframe has staying power. One reason is that many mission-critical applications, such as online transaction processing (OLTP), were originally written for a mainframe using legacy databases and the COBOL programming language. Another is that the mainframe can act as a big, fast, secure server and play a featured role in a service-oriented architecture (SOA).
IBM has taken an interesting approach to providing specialty processors with its System z9 mainframe computers. It uses the same hardware for the specialty processors as it does for a mainframe's CP (Central Processor), but it downloads different microcode into the specialty processors.
Microcode (Firmware)

Microprogramming and nanoprogramming became attractive solutions in the 1970s for giving machines different personalities and optimized instruction sets. One of my projects during that era was doing a system generation for a midsized IBM mainframe on a refrigerator-sized Nanodata QM-1. The QM-1 was a machine targeted at a community with microprogramming needs: universities, labs and computer manufacturers engaged in developing new or customized hardware. A common use of the QM-1 was to write the nanocode and microcode for a new computer before building a prototype. One of the first implementations of a concurrent programming language, Per Brinch Hansen's Concurrent Pascal, was done in microcode by a TRW researcher using a QM-1. Implementation at the microcode or nanocode level is quite different from a software implementation, such as the interpreters for the UCSD p-system or today's Java VM. It's more akin to picoJava, Jazelle DBX, JOP, the Azul Vega 3 and other efforts that implement Java bytecode in hardware.
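The contrast is easier to see with a sketch of what a software VM actually does: a fetch-decode-execute loop over bytecode, where every step of the dispatch costs ordinary machine instructions. (The toy stack machine and its opcodes below are invented for illustration; a real JVM is far richer.) A microcoded or hardware implementation collapses this dispatch into the machine's own control logic.

```python
# Toy opcodes for a hypothetical stack machine (illustration only).
PUSH, ADD, MUL, HALT = range(4)

def run(program):
    """Software fetch-decode-execute loop, in the style of a bytecode interpreter."""
    stack, pc = [], 0
    while True:
        op = program[pc]          # fetch
        pc += 1
        if op == PUSH:            # decode + execute
            stack.append(program[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# Compute (2 + 3) * 4
print(run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]))  # prints 20
```

Every iteration of that `while` loop is itself many native instructions; hardware bytecode execution, as in picoJava or Jazelle, removes that interpretation overhead entirely.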
"A Brief History of Microprogramming" provides a good definition of microprogramming, though today microcode is not limited to the central processing unit (CPU):
"Microprogramming is a systematic technique for implementing the control logic of a computer's central processing unit. It is a form of stored-program logic that substitutes for hardwired control circuitry."
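That definition can be loosely pictured as a control store: each machine-level opcode indexes a stored sequence of micro-operations that would otherwise be produced by hardwired circuitry. The opcode and micro-op names below are invented for this sketch, not drawn from any real machine.

```python
# A toy control store (hypothetical names): each architectural opcode maps to
# the micro-operation sequence that implements it, replacing hardwired control.
CONTROL_STORE = {
    "LOAD":  ["mar<-addr", "read_mem", "acc<-mdr"],
    "ADD":   ["mar<-addr", "read_mem", "alu<-acc+mdr", "acc<-alu"],
    "STORE": ["mar<-addr", "mdr<-acc", "write_mem"],
}

def micro_sequence(opcode):
    """Decode one machine instruction into its stored micro-operation sequence."""
    return CONTROL_STORE[opcode]

# The micro-ops that implement an architectural ADD:
print(micro_sequence("ADD"))
```

On a writable-control-store machine like the QM-1, changing this table changes the machine's instruction set, which is how one computer could take on the "personality" of another.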
Next: Specialty Processors