Process Management
First, I want to make a few statements that will help you understand the content quickly.
Execution Modes vs. Threads
1. Execution Modes
- Kernel Mode: A privileged CPU state where code has unrestricted access to hardware and memory. This is where the operating system’s core functions execute.
- User Mode: A restricted state where applications run. Memory access and hardware operations are limited to prevent system instability.
- The Connection: When a program needs a system service (like file I/O), its executing thread triggers a transition from User Mode to Kernel Mode via a system call. The thread doesn’t “move”; it simply executes privileged kernel code for the duration of the call (see the sketch after this list).
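As a rough illustration, here is a minimal Rust sketch (the file name `demo.txt` is arbitrary). The standard-library calls below run in user mode, but each one ultimately issues a system call (e.g. `openat`/`write` on Linux), which is exactly the user-to-kernel transition described above.

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // These look like ordinary user-mode library calls, but each one ends in a
    // system call (e.g. openat / write on Linux): the calling thread switches
    // to kernel mode, the kernel does the work, and the thread returns to user mode.
    let mut file = File::create("demo.txt")?;
    file.write_all(b"written via a system call\n")?;

    // On Linux, running this binary under `strace` shows every transition.
    Ok(())
}
```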
2. Thread Management Models
Kernel-Level Threads (KLTs)
- Managed entirely by the operating system kernel
- The OS sees and schedules them across CPU cores
- Each KLT can be individually scheduled for parallel execution
- Thread creation, switching, and synchronization involve system calls, making them slower but more capable (see the sketch after this list)
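For example, a minimal sketch using Rust's standard library (Rust's `std` threads are KLTs): each `spawn` asks the kernel for a new thread, and the kernel is free to run them in parallel on different cores.

```rust
use std::thread;

fn main() {
    // Each std::thread::spawn asks the kernel for a new thread (via clone on
    // Linux, CreateThread on Windows), so the OS can schedule these four
    // workers on different cores in parallel.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            thread::spawn(move || {
                println!("worker {i} on OS thread {:?}", thread::current().id());
            })
        })
        .collect();

    // join() waits for each kernel thread to finish; this blocking wait is
    // also handled by the kernel.
    for h in handles {
        h.join().unwrap();
    }
}
```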
User-Level Threads (ULTs)
- Managed by a user-space threading library within the application
- The OS is unaware of them and sees only the containing process
- Multiple ULTs map to one or more underlying KLTs
- Advantages: Extremely fast creation and context switching (roughly 10-100× faster), flexible scheduling policies
- Disadvantages: Cannot utilize multiple cores simultaneously unless mapped to multiple KLTs; a blocking system call made by one ULT blocks every ULT mapped to the same KLT (a toy scheduler sketch follows this list)
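To make the idea concrete, here is a toy cooperative scheduler in Rust. It is not a real ULT library (real ones also switch stacks); it only shows that scheduling can happen entirely in user space, on a single kernel thread, with no system calls per "switch".

```rust
use std::collections::VecDeque;

// What a task reports back to the user-space scheduler after one step.
enum Step {
    Yield, // wants to run again later
    Done,  // finished
}

// A trivial "user-level thread": a named counter that runs one step at a time.
struct Task {
    name: &'static str,
    remaining: u32,
}

impl Task {
    fn step(&mut self) -> Step {
        if self.remaining == 0 {
            println!("{} finished", self.name);
            return Step::Done;
        }
        println!("{} has {} steps left", self.name, self.remaining);
        self.remaining -= 1;
        Step::Yield
    }
}

fn main() {
    let mut ready: VecDeque<Task> = VecDeque::from([
        Task { name: "a", remaining: 2 },
        Task { name: "b", remaining: 3 },
    ]);

    // Round-robin scheduling in user space: run one step, requeue if it yielded.
    // The kernel only ever sees the single thread running this loop.
    while let Some(mut task) = ready.pop_front() {
        if let Step::Yield = task.step() {
            ready.push_back(task);
        }
    }
}
```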
3. Threading Models in Practice
1:1 Model (One ULT per KLT)
- Used by: Windows, Linux (via NPTL), Rust standard library, Java, C#
- Advantages: Simplicity, true parallelism, no blocking issues
- Disadvantages: Higher overhead per thread, since each one is a full kernel thread with its own stack (see the sketch below)
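A small sketch of that overhead in Rust: in the 1:1 model every spawned thread is a full kernel thread with its own stack (Rust's spawned threads default to about 2 MiB), and `std::thread::Builder` lets you tune that cost explicitly.

```rust
use std::thread;

fn main() {
    // Every spawn in a 1:1 model creates a kernel thread with its own stack,
    // which is where much of the per-thread overhead comes from.
    let handle = thread::Builder::new()
        .name("small-stack".into())
        .stack_size(64 * 1024) // 64 KiB instead of the ~2 MiB default
        .spawn(|| {
            println!("running on a kernel thread with a reduced stack");
        })
        .expect("failed to spawn OS thread");

    handle.join().unwrap();
}
```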
M:N Model (Many ULTs to Many KLTs)
- Used by: Go (goroutines), Erlang/Elixir (BEAM processes), early versions of Java’s “green threads”
- Advantages: Highly efficient for massive concurrency (thousands of “threads”), excellent for I/O-bound workloads
- Complexities: Requires a sophisticated user-space scheduler and careful handling of blocking operations (a crude sketch of the multiplexing idea follows this list)
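The core multiplexing idea can be sketched in plain Rust with a shared work queue: M small tasks fed to N kernel threads. Real M:N runtimes (Go's scheduler, the BEAM, etc.) add per-worker run queues, work stealing, and preemption on top of this.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// A task is just a boxed closure that can be sent to another thread.
type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx)); // share one receiver across N workers

    // N kernel threads serve as the execution backend.
    const N_WORKERS: usize = 2;
    let workers: Vec<_> = (0..N_WORKERS)
        .map(|id| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Take the lock only long enough to pull the next task.
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(job) => {
                        println!("worker {id} picked up a task");
                        job();
                    }
                    Err(_) => break, // all senders dropped: shut down
                }
            })
        })
        .collect();

    // Submit M = 8 lightweight tasks -- far more than the kernel threads.
    for i in 0..8 {
        tx.send(Box::new(move || println!("task {i} done"))).unwrap();
    }
    drop(tx); // closing the channel lets the workers exit

    for w in workers {
        w.join().unwrap();
    }
}
```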
Practical Implications
- System Programming: Often uses 1:1 for predictability and control
- Web Servers/Concurrent Apps: Often use M:N for scalability with many connections
- Modern Trend: Async/await running on a thread pool combines the benefits of both models (see the sketch below)
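A minimal sketch, assuming the `tokio` crate (any multi-threaded async runtime works similarly): thousands of cheap tasks are multiplexed onto a small pool of kernel threads, and awaiting a timer yields the task instead of blocking a thread.

```rust
// Assumes tokio = { version = "1", features = ["full"] } in Cargo.toml.
use std::time::Duration;

#[tokio::main] // starts a pool of worker (kernel) threads
async fn main() {
    let mut handles = Vec::new();

    // Spawning an async task is cheap; the runtime multiplexes all of them
    // onto the pool, parking tasks that are waiting instead of tying up a
    // kernel thread.
    for i in 0..1000 {
        handles.push(tokio::spawn(async move {
            tokio::time::sleep(Duration::from_millis(10)).await; // yields, doesn't block
            i
        }));
    }

    for h in handles {
        h.await.unwrap();
    }
}
```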
start :)