The Object of My Correction ... (Part 1)
In our last blog we talked about the 'one-size-fits-all tool' that has to determine scheduling policies and process- and system-wide contention scopes. When there are more threads than there are cores or CPUs, the threads have to wait their turn, and the scheduling policy determines which thread is next in line. Contention scope determines which threads a thread competes with to utilize a core/CPU. There are two: process and system contention scope. Threads with process contention scope compete with threads of the same process. Threads with system contention scope compete with threads of other processes, system-wide. With process contention scope, user-level threads are mapped to kernel-level threads by the thread library. The kernel-level threads are unbound and can therefore be mapped to one or more user-level threads. This is the M:1 thread model. The kernel schedules the kernel threads onto processors according to their scheduling attributes.
With system contention scope, a kernel-level thread is bound to a single user-level thread for the lifetime of that thread. The kernel is solely responsible for scheduling thread execution, and it schedules all threads against all other threads in the system. This is the 1:1 thread model.
So what does this mean when processes run with either scheduling model? Well, it means a lot. With process contention scope, the priority of a thread plays a big factor in when a thread is assigned a processor (core). A thread that has a higher priority than the other threads of its process will delay the execution of those threads, which can lead to starvation. Consider what's going on here. Two processes, P1 and P2, each have several threads. P1's threads have process contention scope and P2's threads have system contention scope. Now which process will receive more execution time? Each of P1's threads has to contend with all the other threads of that process; the library scheduler will select one thread from the group to run (depending on scheduling policy and priority). Whereas each of P2's threads has its own kernel-level thread and demands access to cores/CPUs from the kernel directly. So the threads of P2 will receive more execution time.
I mentioned the "starvation" disadvantage of process contention scope. There is another: the threads will not be able to take advantage of multiple CPUs or cores. There are some advantages, though. The process is not making system calls in order to create the threads. The OS is not involved in creating them, so the process runs faster and more threads can be created without overloading the system. And it's more scalable and portable.
Threads with system contention scope require kernel overhead: system call processing and the maintenance of a kernel data structure for each kernel-level thread (and thus each user-level thread). This makes them less portable and scalable. If the process requires a lot of threads, this can degrade the overall performance of the system and affect other running applications. Sound familiar? But on the other hand, the threads of your process are competing with threads from other processes, and many of them could be running simultaneously when assigned to cores/CPUs. And that's what we want! This is, in general, how process and system contention scope work and affect how a process's threads utilize the cores/CPUs. But there are nuances between operating systems' implementations of contention scope. For example, some OSes do not support process contention scope at all; Linux supports only system contention scope.
There is a third thread model that is a hybrid, or mix, of these contention scopes. The M:N thread model has a two-level scheduler: the thread library and the kernel cooperate to schedule user threads. A pool of user-level threads is mapped to a pool of kernel-level threads. A user-level thread is not bound to a single kernel-level thread but to different ones at different times. The library can select multiple runnable threads from a single process and assign them to the available kernel-level threads in the pool. In this model, there may be as many kernel-level threads as there are cores or CPUs. For user-level threads that are I/O intensive, a kernel-level thread does not sit idle waiting on CPU activity from the user thread. So kernel threads are created according to what is suitable for, or required by, the program's execution behavior. But it does have a disadvantage: this hybrid model makes it difficult to see how user threads relate to the kernel threads that are executing them.