Evolution of Kernels
The kernel and microkernel form the code that's at the heart of an operating system. For processors with multiple operating modes or rings, the kernel or microkernel executes at the most privileged state (supervisor or kernel mode).
The microkernel was developed in part because monolithic OS kernels were becoming large and unwieldy, making them difficult to maintain. It seemed practical to implement support for new filesystems, protocol stacks, and device drivers in code that was outside the kernel and kernel address space. The microkernel enabled OS architects to develop privileged servers such as networking servers, filesystem servers, and display servers to deliver essential services without being built into the kernel. It's possible to start and stop these servers without rebooting, which makes it easier for development and testing. If a server crashes, it does not corrupt data in the kernel address space.
There have been divergent schools of thought about which is the preferred architecture, the microkernel or monolithic kernel. Dr. Andrew S. Tanenbaum of MINIX fame favors the microkernel architecture, whereas Linus Torvalds stated his preference for the monolithic kernel architecture in a 2006 "Hybrid kernel, not NT" forum post.
Torvalds felt the separation of address spaces, making it impossible to share data structures, introduced too much complexity:
"Microkernels are much harder to write and maintain exactly because of this issue."
The dichotomy of opinion among OS developers also extends to the hypervisor and virtualization community. Xen and VMware are examples of the microkernel architecture, whereas MokaFive BareMetal is based on the monolithic model.
SQL Database Servers
Prior to the famous Tanenbaum-Torvalds debate, the SQL database community had already been weighing monolithic database architecture against client-server database processing. The latter moved database processing off shared, centralized mainframes to distributed, dedicated servers. DBMS architects also came to favor a separation between the database kernel and services, plug-ins, or extenders.
This was noticeable over time as SQL platforms evolved away from a Swiss Army Knife design. Instead of adding replication, for example, to the core database engine, it made more sense to write a separate replication server.
When you install a high-end SQL product, it uses a layered architecture that brings up a number of servers at startup time. The layers of services in a DBMS can include storage management, network access, lock management, replication, security, query, cache management, memory management, OLAP services, and so on.
For a product such as Oracle Database, the kernel is the core of the server process. Network communications map to a layer known as the Transparent Network Substrate (TNS), which was designed to support heterogeneous connectivity. An Oracle server runs a TNS listener for processing database requests using client-server protocols. Listeners are linked to endpoints (port numbers) for HTTP, FTP, and XML DB requests. The listener forwards requests to either a shared server or a dedicated server process. The kernel uses background processes such as the Process Monitor, System Monitor, Log Writer, and Database Writer. It also uses slave processes working in the background, such as I/O Slave Processes and Parallel Query Slaves.
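To make the listener's role concrete, here is a minimal sketch of a listener.ora configuration that binds a TNS listener to a TCP endpoint. The host name and port shown are placeholders, and real deployments typically add further descriptions for additional protocols and services:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    )
  )
```

A client connect descriptor in tnsnames.ora points at the same host and port; the listener then hands the incoming connection to a shared or dedicated server process as described above.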
One benefit of the Oracle kernel architecture is the ability to capture a top-level trace. For debugging and tuning, a trace shows what's happening during execution of a query, from the kernel down through the other processes.
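As a sketch of what this looks like in practice, tracing can be enabled for a single session with Oracle's SQL trace facility. The identifier value below is an arbitrary placeholder, and the exact trace contents and file locations vary by release and configuration:

```
-- Tag the trace file so it is easy to locate on disk
ALTER SESSION SET tracefile_identifier = 'mytrace';

-- Turn on SQL trace for this session only
ALTER SESSION SET sql_trace = TRUE;

-- ... run the query under investigation ...

-- Turn tracing back off
ALTER SESSION SET sql_trace = FALSE;
```

The raw trace file can then be formatted with the tkprof utility to summarize parse, execute, and fetch statistics for each statement.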