Software application developers could soon find themselves supporting their applications and data on a new internal engineering infrastructure if the Open Compute Project (OCP) has its way.
The user-led organization, whose members include Facebook, has called for the disaggregation of the core components of computer systems: the OCP wants processors, motherboards, and networking interconnects separated so that each can be upgraded independently.
The scheme stands in marked contrast to the current industry trend of converged systems, which combine servers, storage, and networking into a single product. At this stage it is hard to say whether a wider user and developer backlash against convergence will surface, but it has to be considered a strong possibility.
Convergence has gained some traction with customers in recent years due to the relative ease and speed of deployment that pre-integrated systems enable, but there are trade-offs in terms of cost and vendor lock-in. For the largest data centers, buying systems at a more granular component layer promises more flexibility, higher density, and significant cost reductions.
"Current monolithic designs can't easily be customized to fit specific workload requirements or to maximize efficiency," said John Abbott, distinguished analyst at 451 Research. "[This means that programmers and ultimately users] can't, for instance, take advantage of the latest high performance CPU without having to upgrade surrounding technologies that are still operating well."
A recent 451 Research report follows OCP's unveiling of two key projects intended to kick-start disaggregation: low-latency interconnects using silicon photonics to link components at both the motherboard and the rack level, and a new common slot architecture that should allow fully vendor-neutral motherboards to remain in use across multiple processor generations. Chip giant Intel has contributed its silicon photonics technology, and Taiwanese systems maker Quanta has built a prototype to prove out the concept.
The argument here is that mega data centers could benefit from deploying their CPUs, I/O, memory, and storage in separate racks, enabling upgrades to take place independently, eliminating performance bottlenecks, and improving such operational aspects as reliability, utilization, footprint, and energy efficiency.
According to a 451 Research press statement, there is still plenty of work to be done: "Standards must replace the current open hardware specifications, and these must then be married seamlessly with modular, interoperable, and stable open software stacks, tying the disaggregated components back together again through systems management products."