A four-lane SRIO link running at 3.125 Gbps per lane can deliver 10 Gbps of effective throughput (after 8b/10b encoding overhead) with full data integrity. SRIO is similar to microprocessor buses: it implements memory and device addressing as well as packet processing in hardware. This allows significantly lower I/O processing overhead, lower latency, and increased system bandwidth relative to other bus interfaces. Unlike most other bus interfaces, however, SRIO has a low pin count and scalable bandwidth based on high-speed serial links, whose per-lane rate can scale from 1.25 to 3.125 Gbps. Figure 4 illustrates the SRIO specification.

Figure 4. SRIO specification
Computing resources in platforms
With the availability of configurable processing resources, developers are implementing applications in hardware. For example, data compression and encryption algorithms, and even complete firewall and security applications, which were previously implemented in software, are now implemented in hardware. These hardware implementations demand a massively parallel ecosystem of shared bandwidth and processing power, with shared or distributed processing across CPUs, NPUs, FPGAs, and/or ASICs. Some of the computing resource requirements for building such a system include:
- Distributed processing supporting complex topologies
- Direct peer-to-peer communication with high reliability
- Support for multiple heterogeneous operating systems
- Ability to run the communications data plane across those heterogeneous operating systems
- Availability of modular and extendable platforms that have broad ecosystem support
The SRIO protocol was architected and specified to support the disparate requirements of compute devices in the embedded and wireless infrastructure space. SRIO provides architectural independence, the ability to deploy scalable systems with carrier-grade reliability, advanced traffic management, and provisioning for high performance and throughput. In addition, a broad ecosystem of vendors makes it easy to build SRIO systems from off-the-shelf components. SRIO is a packet-based protocol that supports:
- Data movement using packet-based operations (read, write, message)
- Non-coherent I/O functions as well as cache-coherence functions
- Efficient interworking and protocol encapsulation through support for data streaming and SAR (segmentation and reassembly) functions
- A traffic management framework enabling millions of streams, support for 256 traffic classes, and lossy operations
- Flow control supporting multiple transaction request flows, including provisions for QoS
- Priority support to address bandwidth allocation, transaction ordering, and deadlock avoidance
- Support for standard (trees and meshes) and arbitrary (daisy-chain) hardware topologies through system discovery, configuration, and bring-up, including support for multiple hosts
- Error management and classification (recoverable, notification and fatal)
IP Solutions for Serial RapidIO
Vendors such as Xilinx offer endpoint IP solutions for Serial RapidIO designed to the latest RapidIO Specification, v1.3. These solutions support fully compliant maximum-payload operations for both sourcing and receiving user data through target and initiator interfaces on the Logical (I/O) and Transport Layer IP.
The complete Xilinx Endpoint IP solution for SRIO is shown in Figure 5. It consists of the following components:
- LogiCORE RapidIO Logical (I/O) and Transport Layer IP
- Buffer Layer Reference Design
- LogiCORE Serial RapidIO Physical Layer IP
- Register Manager Reference Design

Figure 5. Xilinx Endpoint IP architecture for SRIO