Multi-core Performance In Single-Core Using Multi-threaded Virtual Multiprocessors: Part 1

In Part 1, the authors describe the basics of multi-threading and virtual multiprocessing, and how the MIPS MT Application Specific Extension (ASE) approach is similar to, and different from, traditional multi-threading approaches.


November 20, 2006
URL:http://www.drdobbs.com/parallel/multi-core-performance-in-single-core-us/194500234

Designers everywhere face ever increasing constraints on system cost and power consumption, while at the same time being required to add more performance and functionality to their designs. This is a difficult trade-off to address successfully.

One traditional approach has been to ramp up the clock speed of the processor, but this usually results in increased power consumption. Additionally, memory performance has not kept pace with processor technology (see Figure 1, below), and this mismatch limits any significant gains in system performance. Consequently, the higher-frequency approach has led to diminishing returns.

Figure 1: Processor-memory mismatch causes system performance bottlenecks

A multi-core system is another option, but this suffers from a larger die area and higher cost. Any increase in performance comes at a fairly substantial cost in silicon and system power consumption. Multiple-issue processors with two or more execution units offer another option, but they struggle to make best use of hardware resources, and also have an area penalty. Additionally, the software has to be revised in many cases to make best use of the multiple pipelines.

Multi-threading on a Single Core
Supporting multiple software threads on a single processor core offers the benefits of these traditional approaches without any of the associated disadvantages. While multi-threading is gaining traction in the server and desktop markets, it has not yet been optimized for the embedded market.

Figure 2: Pipelines can stall easily, slowing down applications

One of the main problems with traditional single-threaded processors is that the execution pipeline will stall for many reasons, including cache misses, branch mispredictions, and other pipeline interlock events (Figure 2, above). The key to obtaining the maximum performance from any processor core is controlling the way the threads are executed in the pipeline (Figure 3, below).

Figure 3: Improving pipeline efficiency with multiple threads

To resolve such problems, without resorting to a totally new architecture, the MIPS32 34K core uses a new Virtual Processing Element (VPE) approach, supported by carefully chosen application-specific extensions to the MIPS Instruction Set Architecture. This allows efficient multi-threaded use of the 34K core's nine-stage execution pipeline (Figure 4, below), coupled with a small amount of hardware to handle the virtual processors, the thread contexts, and quality-of-service (QoS) prioritization.

Figure 4: The 34K's nine-stage pipeline

The VPE hardware
As illustrated in Figure 5 below, each thread has its own dedicated hardware, called the thread context (TC). This allows each thread to have its own instruction buffer with pre-fetching so that the processor can switch between threads on a cycle-by-cycle basis to keep the pipeline as full as possible. All this avoids the costly overheads of context switching.

Figure 5: 34K processor design

Each TC has its own set of general-purpose registers and its own program counter (PC), which allows a TC to run a thread from a complex operating system such as Linux. A TC also shares resources with other TCs, particularly the CP0 registers used by the privileged code in an OS kernel.

The set of shared CP0 registers and the TCs affiliated with them make up a VPE (Virtual Processing Element). A VPE running one thread (i.e. with one TC) looks exactly like an independent MIPS CPU, and is fully compliant with the MIPS32 architecture specification.
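To make this division of resources concrete, the following C sketch models, in a purely illustrative way, which state the text describes as private to each TC and which is shared per VPE. The type and field names are hypothetical; they are not structures defined by the MIPS MT ASE.

#include <stdint.h>

#define GPRS_PER_TC 32

/* Hypothetical model: state private to one thread context (TC). */
struct thread_context {
    uint32_t gpr[GPRS_PER_TC];   /* general-purpose registers            */
    uint32_t pc;                 /* program counter / restart address    */
    /* per-TC instruction buffer and prefetch state would also live here */
};

/* Hypothetical model: state shared by all TCs bound to one VPE. */
struct vpe {
    uint32_t cp0_status;         /* CP0 privileged state used by the OS  */
    uint32_t cp0_cause;
    uint32_t cp0_entryhi;        /* MMU state (per-VPE by default)       */
    struct thread_context *tcs[9]; /* up to nine TCs on a 34K core       */
    int num_tcs;
};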

All threads (in either VPE) share the same caches, so cache coherency is inherently maintained. This eliminates the problem in multi-core and multiprocessor systems, where many cycles and additional logic are used to manage the different processors and ensure cache-coherency.

Depending on the application requirements, the 34K core can be configured for up to nine TCs that are supported across a maximum of two VPEs. It is this combination of the VPEs with the TCs that provides the most area-efficient and flexible solution. For example, one VPE could be configured to run a complete real-time operating system (RTOS) or data plane application for DSPs, while the other could run Linux or a control plane application.

Alternatively, the processor could also be configured for VSMP (virtual symmetric multiprocessing) offering significantly higher application throughput with only a very small increase in die-area. In both of these scenarios, a single 34K core replaces multiple discrete processors.

Quality of Service (QoS)
The QoS engine picks instructions from runnable threads using a weighted round-robin algorithm, interleaving instructions on a cycle-by-cycle basis. For maximum overall application throughput, the processing bandwidth is shared finely among the different threads, so that each thread fills the processing "gaps" left by the others.

Alternatively, it can also achieve QoS for real-time tasks such as communications, video and audio processing by allocating dedicated processing bandwidth to specific thread(s).

The QoS is handled with a hierarchical approach (see Figure 6, below) in which the user can allocate various levels of processing bandwidth to the available TCs. Based on this allocated bandwidth, the integrated Policy Manager assigns priorities to the individual TCs, constantly monitors the progress of the threads, and provides critical "hints" to the Dispatch Scheduler as needed. The Dispatch Scheduler, in turn, schedules the threads to the execution unit on a cycle-by-cycle basis, ensuring that the QoS requirements are met.

Figure 6: QoS hierarchy
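As a rough software model of the weighted, cycle-by-cycle interleaving just described, consider the following C sketch. It is illustrative only: the real Dispatch Scheduler also reacts to Policy Manager hints and to thread blockage, and none of the names or numbers below come from the 34K hardware.

#include <stdio.h>

#define NUM_TCS 3

struct tc_state {
    int weight;    /* relative share of issue bandwidth          */
    int credit;    /* issue slots remaining in the current round */
    int runnable;  /* 1 if this thread can issue this cycle      */
};

/* Pick the next TC to issue from, one instruction per cycle. */
static int pick_tc(struct tc_state tcs[], int n)
{
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < n; i++) {
            if (tcs[i].runnable && tcs[i].credit > 0) {
                tcs[i].credit--;
                return i;
            }
        }
        /* All credits spent: start a new round. */
        for (int i = 0; i < n; i++)
            tcs[i].credit = tcs[i].weight;
    }
    return -1;  /* nothing runnable: the pipeline would stall */
}

int main(void)
{
    struct tc_state tcs[NUM_TCS] = {
        { .weight = 2, .credit = 2, .runnable = 1 },  /* real-time thread  */
        { .weight = 1, .credit = 1, .runnable = 1 },  /* background thread */
        { .weight = 1, .credit = 1, .runnable = 0 },  /* blocked on a miss */
    };

    for (int cycle = 0; cycle < 8; cycle++)
        printf("cycle %d: issue from TC%d\n", cycle, pick_tc(tcs, NUM_TCS));
    return 0;
}

Run over eight cycles, the runnable real-time thread gets two issue slots for every one given to the background thread, which is the kind of bandwidth guarantee the Policy Manager is programmed to enforce.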

Why the new approach to multi-threading?
As processor operating frequency increases, it becomes increasingly difficult to hide latencies inherent in the operation of a computer system. A high-end synthesizable core taking 25 cache misses per thousand instructions (a plausible value for "multimedia" code) could be stalled more than 50% of the time if it has to wait 50 cycles for a cache fill.
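The arithmetic behind that figure is easy to check. Here is a minimal sketch, using the numbers quoted above and assuming one instruction per cycle whenever the pipeline is not stalled:

#include <stdio.h>

int main(void)
{
    /* Figures quoted in the text: 25 cache misses per 1,000 instructions
     * and a 50-cycle cache fill. One instruction per unstalled cycle is an
     * assumption made for this estimate. */
    double instructions = 1000.0;
    double misses       = 25.0;
    double miss_penalty = 50.0;

    double stall_cycles = misses * miss_penalty;        /* 1250        */
    double total_cycles = instructions + stall_cycles;  /* 2250        */
    double stalled      = stall_cycles / total_cycles;  /* about 0.556 */

    printf("stalled %.0f of %.0f cycles (%.1f%%)\n",
           stall_cycles, total_cycles, stalled * 100.0);
    return 0;
}

With those numbers the core spends roughly 55% of its cycles waiting on cache fills, which is where the "more than 50%" figure comes from.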

More generally, individual computer instructions have specific semantics, such that different classes of instructions require different resources to perform the desired operation. Integer loads don't exploit the logic or registers of a floating-point unit, any more than register shifts require the resources of a load/store unit.

No single instruction consumes all of a computer's resources, and the proportion of the total system resources that is used by the average instruction diminishes as one adds more pipeline stages and parallel functional units to high-performance designs.

Multi-threading arises in large measure from the notion that, if a single sequential program is fundamentally unable to make fully efficient use of a processor's resources, the processor should be able to share some of those resources among multiple concurrent threads of program execution.

The result does not necessarily make any particular program execute more quickly - indeed, some multi-threading schemes actually degrade the performance of a single thread of program execution - but it allows a collection of concurrent instruction streams to run in less time and/or on a smaller number of processors.

Multi-threading can provide benefits beyond improved multitasking throughput, however. Binding program threads to critical events can reduce event response time, and thread-level parallelism can, in principle, be exploited within a single application program to improve absolute performance.

Varieties of Multi-threading
A number of implementation models for multi-threading have been proposed, and some of them have been implemented commercially.

Interleaved Multi-threading is a time-division-multiplexed (TDM) style approach that switches from one thread to another on each instruction issued. Interleaved multi-threading assures some degree of "fairness" in scheduling threads, but implementations that statically allocate issue slots to threads generally limit the performance of a single program thread. Dynamic interleaving ameliorates this problem, but is more complex to implement.

The diagram in Figure 7, below, shows how instructions from program threads "A" and "B" might be issued. In the classical scalar RISC case, we see four consecutive instructions from program A being issued, with one wasted cycle due to a pipeline stall.

Figure 7: Interleaved Multithreading

A statically interleaved scheme that alternates between two threads might only be able to issue three instructions in the same amount of time if A is the only thread available to run, as shown in the left-hand column. If A and B are both runnable, as shown in the right-hand column, the alternation between the two fills the pipeline and hides the stall in A. Using a dynamic interleaving scheme allows a single thread to run without degradation relative to the scalar RISC case, while still achieving better efficiency when multiple threads can be run.
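The cost of static slot allocation is easy to see in a toy model. The code below is purely illustrative (it describes no particular hardware design): with only thread A runnable, a strict two-way alternation does useful work on only half of the cycles.

#include <stdio.h>

/* Static interleaving: even-numbered issue slots belong to thread A and
 * odd-numbered slots to thread B, whether or not that thread is runnable. */
static int instructions_issued(int a_runnable, int b_runnable, int cycles)
{
    int issued = 0;
    for (int c = 0; c < cycles; c++) {
        int owner_runnable = (c % 2 == 0) ? a_runnable : b_runnable;
        if (owner_runnable)
            issued++;   /* the slot's owner issues an instruction          */
        /* else: the slot is wasted, even if the other thread could run it */
    }
    return issued;
}

int main(void)
{
    printf("A only : %d instructions in 6 cycles\n", instructions_issued(1, 0, 6));
    printf("A and B: %d instructions in 6 cycles\n", instructions_issued(1, 1, 6));
    return 0;
}

A dynamic scheme would hand the wasted slots back to thread A, which is exactly the degradation-free single-thread behavior described above.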

Blocked multi-threading issues consecutive instructions from a single program thread until some designated blocking event, such as a cache miss, causes that thread to be suspended and another thread activated.

The diagram in Figure 8 below shows how instructions from program threads "A" and "B" might be issued in a blocked multi-threading system, relative to a scalar RISC. The scalar RISC stalls for as long as it takes to perform the cache refill, shown here as a wildly optimistic three cycles.

Figure 8: Blocked Multi-threading

A blocked multi-threading design might behave identically to the scalar RISC if there is no other thread to run, but given two threads, the blocked multi-threading processor switches from thread A to thread B as soon as thread A encounters the major stall. Note that the thread switch may not be instantaneous, and that while thread A is runnable on the last cycle shown, thread B retains the processor, since it has not yet been blocked.

Because blocked multi-threading changes threads less frequently, its implementation can be simplified. On the other hand, it is less "fair" in scheduling threads. A single thread can monopolize the processor for a long time if it is lucky enough to find all of its data in the cache.

Hybrid scheduling schemes combining elements of blocked and interleaved multi-threading have also been built and studied.
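A similarly simplified toy model shows why switching on a miss helps throughput. The numbers are illustrative only, and the model assumes every refill completes while the other thread is running.

#include <stdio.h>

#define MISS_EVERY    4   /* each thread misses on every 4th instruction */
#define MISS_PENALTY  3   /* refill latency in cycles (optimistic!)      */
#define SWITCH_COST   1   /* cycles lost on each thread switch           */

/* Single-threaded core: stalls for the full refill on every miss. */
static int cycles_single_thread(int instructions)
{
    int misses = instructions / MISS_EVERY;
    return instructions + misses * MISS_PENALTY;
}

/* Blocked multi-threading with two threads: each miss is overlapped with
 * useful work from the other thread, so only the switch cost is paid. */
static int cycles_blocked_mt(int instructions_per_thread)
{
    int total    = 2 * instructions_per_thread;
    int switches = total / MISS_EVERY;
    return total + switches * SWITCH_COST;
}

int main(void)
{
    printf("single-threaded, 2 x 8 instructions: %d cycles\n",
           2 * cycles_single_thread(8));
    printf("blocked MT,      2 x 8 instructions: %d cycles\n",
           cycles_blocked_mt(8));
    return 0;
}

Even with the optimistic three-cycle refill used here, hiding each miss behind the other thread's work recovers a useful fraction of the lost cycles.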

Simultaneous Multi-threading is a scheme implemented on superscalar processors wherein instructions from different threads can be issued concurrently.

The diagram in Figure 9, below, shows a superscalar RISC, issuing up to two instructions per cycle, and a simultaneously multi-threaded superscalar pipeline, issuing up to two instructions per cycle from either of the two threads.

Those cycles where dependencies or stalls prevented full utilization of the processor by a single program thread are filled by issuing instructions for another thread.

Simultaneous multi-threading is thus a very powerful technique for recovering lost efficiency in superscalar pipelines. It is also the most complex multi-threading system to implement. More than one thread may be active at a given pipeline stage on a given cycle, complicating the implementation of memory access protection, etc.

Figure 9: Simultaneous Multithreading

Multi-threading versus Multi-core/Multiprocessors
Multi-threading and multiprocessing are closely related. Indeed, one could argue that the difference is one of degree: Whereas multiprocessors share only memory and/or connectivity, multi-threaded processors share those, but also share instruction fetch and issue logic, and potentially other processor resources.

In a single multi-threaded processor, the various threads compete for issue slots and other resources, which limits parallelism. Some "multi-threaded" programming and architectural models assume that new threads are assigned to distinct processors, to execute fully in parallel.

In very-high-end processors like the Intel Pentium 4, the throughput improvement from the limited parallelism provided by a multi-threaded processor seems to be quite good relative to the incremental silicon cost: figures like 65% more throughput in return for 5% more silicon have been claimed. It must be understood, however, that the silicon cost of multi-threading can be much higher as a percentage of total area in a small embedded core than in a Pentium 4-class processor.

In the light of all this, the approach used in the MIPS MT ASE strives to provide a framework both for the management of parallel threads on the same CPU and for the management of parallel threads across multiple cores, and indeed for the migration of threads from one multi-threaded processor to another.

The Virtual Multiprocessor Approach
Mainstream operating systems technology understands symmetric multiprocessing (SMP) reasonably well. Several Microsoft operating systems support SMP platforms, as does Linux. "Multi-threaded" applications exist which exploit the parallelism of such platforms, using "heavyweight" threads provided by the operating system.

The virtual multiprocessor (VMP) model of the proposed MIPS multi-threading ASE is designed to provide maximum leverage to this technology. A multi-threaded processor, configured as two single-threaded VPEs, is indistinguishable to applications software from a 2-way SMP multiprocessor. The operating system would have no need to use any of the new instructions or privileged resources defined by the ASE. Only in the case of a dynamically configurable VMP would logic need to be added to the boot code to set up the various VPEs.

Each MIPS MT TC has its own interrupt "exemption" bit and its own MMU address space identifier (ASID), which allows operating systems to be modified or written to use a "symmetric multi-TC" (SMTC) model, wherein each TC is treated as an independent "processor". Because multiple TCs may share the privileged resources of a single VPE, an SMTC operating system requires additional logic and complexity to coordinate the use of the shared resources, relative to a standard MIPS32/MIPS64 OS, but the SMTC model allows SMP-like concurrency up to the limit of available TCs.

While the default configuration of multiple VPEs in a MIPS MT processor provides each VPE with an independently programmable MMU, such that legacy SMP memory management code will work correctly, it is possible for software to configure the processor to share MMU TLB resources across all VPEs. This requires a level of coordination between "CPUs" (really TCs or VPEs) that is not present in legacy SMP operating systems, but allows for an advanced SMP/SMTC operating system to achieve a more favorable TLB miss rate.

Master/Slave VPEs: How they accomplish their tasks
One or more VPEs on a processor may power up as "master" VPEs, indicated by the MVP field of the VPEConf0 register. A master VPE can access the registers of other VPEs by using MTTR/MFTR instructions, and can, via a DVPE instruction, suspend all other VPEs in a processor. (More on these instructions and registers later, in Part 2 of this series.)

This Master/Slave model allows a multi-tasking master "application processor" VPE running an operating system such as Linux to dispatch real-time processing tasks on another VPE on behalf of various applications. While this could be done using an SMP paradigm, handing work off from one OS to another, MIPS MT also allows this to be done more directly.

A master VPE can take control of another VPE of the same processor at any time. This is enabled through the use of a special DVPE instruction, which suspends all threads of a core except the one that executed the instruction.

Once a DVPE instruction has been issued by the master VPE, the slave VPE's CP0 privileged resource state can be set up as needed using MTTR instructions targeting TCs that are bound to the slave VPE. The necessary instructions and data can then be placed in memory visible to the slave VPE, and one or more of its TCs can be initialized using MTTR instructions to set up their TCRestart addresses (and indeed their GPR register values, if appropriate). Finally, the master VPE dispatches the slave VPE to begin execution on the configured TCs by executing an EVPE instruction.
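Spelled out as a skeleton in C, that hand-off sequence might look like the following. This is only an outline of the ordering: every function name here is a hypothetical placeholder, not a MIPS-defined API, and a real implementation would expand the helpers into the privileged DVPE, EVPE and MTTR operations described in Part 2.

#include <stdint.h>

static void disable_other_vpes(void)
{
    /* would execute DVPE: suspend all VPEs except the one running this code */
}

static void enable_other_vpes(void)
{
    /* would execute EVPE: resume multi-VPE execution */
}

static void set_slave_cp0_state(int tc)
{
    /* would use MTTR, targeting a TC bound to the slave VPE, to set up
     * that VPE's privileged CP0 state */
    (void)tc;
}

static void set_tc_restart(int tc, void (*entry)(void))
{
    /* would use MTTR to write the TC's restart (TCRestart) address */
    (void)tc; (void)entry;
}

static void set_tc_gpr(int tc, int gpr, uintptr_t value)
{
    /* would use MTTR to write one of the TC's general-purpose registers */
    (void)tc; (void)gpr; (void)value;
}

/* Code and data for the slave VPE, placed in memory visible to it. */
static volatile uint32_t shared_buffer[256];

static void realtime_task(void)
{
    /* e.g. a DSP inner loop operating on shared_buffer */
}

void dispatch_to_slave(int slave_tc)
{
    disable_other_vpes();                               /* 1. DVPE              */
    set_slave_cp0_state(slave_tc);                      /* 2. privileged state  */
    set_tc_restart(slave_tc, realtime_task);            /* 3. entry point       */
    set_tc_gpr(slave_tc, 4, (uintptr_t)shared_buffer);  /* 4. argument in $a0   */
    enable_other_vpes();                                /* 5. EVPE: let it run  */
}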

Threads versus VPEs
Why the distinction between threads and VPEs? Because there are two ways for software to approach multi-threading: one that is easy but relatively expensive in silicon support and limited in the leverage it provides to applications, and another that is more difficult to program but provides leverage for finer degrees of parallelism at a lower cost in silicon.

VPE Parallelism is equivalent to symmetric multiprocessor (SMP) parallelism. This means that operating systems which know how to deal with SMP system configurations can easily be adapted to run multi-VPE cores, and that programs already written using SMP multi-threading or multi-tasking can exploit VPE parallelism.

Thread parallelism in the context of the proposed ASE refers to fine-grained, explicitly controlled thread parallelism. This requires new OS/library/compiler support, but takes full advantage of low thread creation and destruction overhead to exploit parallelism at a granularity that would otherwise be impractical. The hardware support requirement for a TC is less than that of a VPE, so more TCs can be instantiated per unit of chip area.

Implementation and Instantiation of Thread Contexts
The "name space" for TCs in MIPS MT is flat, with thread numbers ranging from 0 to 255. This does not mean that 256 TCs need to be implemented, nor that the full implemented compliment of architecturally visible threads needs to be instantiated as high speed, multi-ported register files. Designers may implement a hierarchy of thread storage, provided that the software semantics of the ASE are respected. A typical implementation would be a flat structure of 4-8 TCs.

Figure 10: Thread resources superimposed on thread scheduling state machine

It is important to distinguish between the software-visible state of a TC as defined by the MIPS MT ASE, and the hardware state associated with the selection and scheduling of runnable threads.

As seen by software, a MIPS MT TC may be in either a free or an activated allocation state and, independent of its allocation state, it may be halted. An activated TC should not be confused with a "running" thread (though a running thread must have an activated context), nor should a halted TC be confused with a "waiting" thread. The diagram in Figure 10, above, shows the TC resource states superimposed on an implementation's thread scheduling state machine.

Next in Part 2: The MIPS MT Application Specific Extensions and how to use them.

Kevin D. Kissell is Principal Architect at MIPS Technologies, and has been a part of the MIPS architecture development team since 1997. He was first involved in the architecture and design of RISC microprocessors when he joined the original Fairchild Clipper design team in 1983. In between, Kevin has been variously responsible for processor, systems and software architecture, for decoupled access/execute supercomputers at ACRI, massively parallel distributed memory computers at nCUBE, and large-scale shared-memory supercomputers at Evans & Sutherland. His prior work at MIPS includes having been principal architect of the SmartMIPS extensions for smart cards.

Peter Del Vecchio is a Product Marketing Manager at MIPS Technologies, responsible for the MIPS32 24K, 24KE and 34K core families. Peter began his career at Sun Microsystems, where he worked in Engineering on the SuperSPARC and UltraSPARC processors. He then joined CompCore Multimedia, a startup focused on IP licensing for audio and video technology, and worked at Mobilygen Corp. as the Director of Product Marketing.

To learn more about the subject of multicore and multithreaded architectures on Embedded.com, go to "More About Multicores, Multiprocessors and Multithreading."
