
CS 326 Lecture/Lab Meeting Summary – Spring 2025

Date & Time: April 22, 2025, 02:54 PM Pacific Time
Meeting ID: 813 3845 7711

This document summarizes the key points discussed during the session, including scheduling algorithms, kernel evolution for multiprocessors, lock handling, virtual memory concepts, and more.


Overview

The session covered the following major topics:

  • Scheduling algorithms: Implementation and considerations for standard, lottery, and stride scheduling.
  • Kernel evolution: How the Linux kernel evolved to support multiprocessors.
  • Process state transitions: Changing a process’s state from running to runnable and invoking the scheduler.
  • Locking mechanisms: Detailed discussion on spin locks, fine-grain locks, and their limitations.
  • Virtual memory: Concepts, page table management, and mapping virtual addresses to physical memory.

Quick Recap

  • Implementation of various scheduling algorithms and the process of switching a thread’s state.
  • Evolution of the Linux kernel from a single-CPU design to supporting multiprocessor systems.
  • Explanation of spin locks, their use in multiprocessor systems, and limitations in user-level processes.
  • Introduction of virtual memory and the mapping between virtual addresses and physical RAM.
  • Addressing lottery scheduling challenges and the importance of demonstrating a working test program.

Next Steps

  • Project 3 Implementation:
    • Implement lottery and stride scheduling algorithms.
    • Demonstrate working test programs in the lab.
    • Increase the number of ticks in test programs (from 50 to 100 or 200) to improve statistical accuracy.
    • Experiment with different pseudo-random number generators for improved randomness in lottery scheduling.
  • Practice Problems:
    • Review the practice problems covering lab topics since the midterm (to be posted by the instructor).
  • Virtual Memory Study:
    • Study virtual memory concepts in preparation for the next project.
  • Future Project Details:
    • Upcoming project will involve implementing shared memory between processes.

Detailed Summary

1. Scheduling Algorithms and Ticket Distribution

Greg explained the need to implement different scheduling algorithms in the operating system. The key points include:

  • Separate Scheduler Functions:
    Different functions should be created for each scheduler type (standard, lottery, stride). A conditional mechanism can select the appropriate scheduler mode. For example:

    switch (scheduler_mode) {
        case SCHED_STANDARD:
            standard_scheduler();
            break;
        case SCHED_LOTTERY:
            lottery_scheduler();
            break;
        case SCHED_STRIDE:
            stride_scheduler();
            break;
    }
    
  • Testing Environment:
    The current testing setup uses two processes with fixed shares. In a real scenario, managing ticket distribution for lottery scheduling would be more complex to maintain proportional shares.

  • Logging Challenges:
    Logging within the scheduler is challenging. Students should review earlier discussions to better understand debugging and logging in this context.


2. Linux Kernel Evolution for Multiprocessors

The lecture reviewed how the Linux kernel evolved to support multiprocessors:

  • Initial Single-CPU Design:
    Early versions of Linux could only run on one CPU.

  • Big Kernel Lock:
    Multiprocessor support was initially achieved using a big kernel lock, which limited parallelism by allowing only one process in the kernel at a time.

  • Fine-Grain Locking:
    The kernel evolved to use fine-grain locks to protect specific data regions, allowing multiple processes to execute concurrently. Every process has its own lock that must be acquired prior to accessing its data.

Below is the scheduler loop from xv6-riscv, showing how each process's lock is acquired before its state is changed and released after the context switch returns:

void
scheduler(void)
{
    struct proc *p;
    struct cpu *c = mycpu();

    c->proc = 0;
    for (;;) {
        intr_on();  // Enable interrupts to avoid deadlock when nothing is runnable.
        // Iterate through the process table.
        for (p = proc; p < &proc[NPROC]; p++) {
            acquire(&p->lock);
            if (p->state == RUNNABLE) {
                // Switch to the chosen process. It is the process's job
                // to release its lock and reacquire it before jumping back here.
                p->state = RUNNING;
                c->proc = p;
                swtch(&c->context, &p->context);
                c->proc = 0;
            }
            release(&p->lock);
        }
    }
}

Mermaid Diagram: Kernel Locking Evolution

flowchart TD
    A[Single CPU Kernel]
    B[Multiprocessor with Big Kernel Lock]
    C[Fine-Grain Locking]
    
    A --> B
    B --> C

3. Lock Handling in Lottery Scheduler

Key points regarding lock handling in the lottery scheduler include:

  • State Transition:
    The transition from a running state to a runnable state involves context switching and releasing locks.

  • Consistency Check:
    Students should compare the lock handling in the standard scheduler with that in the lottery scheduler to ensure consistency.

  • Debugging Advice:
    It is recommended to run the system in debug mode and add detailed log messages to isolate and resolve lock-related errors.


4. Kernel Mode Switching Process

The kernel mode switching process was described as follows:

  • User vs. Kernel Space:
    When a process runs in user space, it does not access the kernel’s process structures and hence does not need a lock.

  • Timer Interrupts:
    When a timer interrupt occurs, the process is switched to kernel mode, where a lock is acquired to modify the process structure, ensuring its integrity. The lock is released before returning to user mode.

Mermaid Diagram: Kernel Mode Switching

flowchart LR
    U[User Mode]
    I[Timer Interrupt]
    K[Kernel Mode]
    R[Return to User Mode]

    U --> I
    I --> K
    K --> R

5. CPU Structure and Interrupt Management

The discussion on CPU management emphasized:

  • Current Process Pointer:
    Each CPU structure holds a pointer to the currently running process, facilitating efficient scheduling.

  • Interrupt Management:
    Minimizing the duration for which interrupts are disabled is crucial to ensure that critical events are not missed.

  • Spin Locks Usage:
    Spin locks are used for synchronization between processors, although care must be taken not to disable interrupts for excessive durations.


6. Spin Locks and Multiprocessor Limitations

The limitations of spin locks in a multiprocessor environment were discussed:

  • CPU Cycle Waste:
    Continuous spinning can waste valuable CPU cycles, especially if the process is waiting for a lock.

  • User-Level Limitation:
    In user-level processes, shared memory is typically not available, making spin locks less effective.

  • Alternative Mechanisms:
    Alternatives such as sleep locks (blocking locks), which put the waiting process to sleep instead of spinning, might be more suitable for longer waits.

  • OS Lock Usage:
    Most locks in the operating system are related to file systems and device management, with fewer locks for process management.


7. Improving Test Program and Lottery Scheduling

The conversation also focused on strategies to improve project test programs:

  • Test Program Demonstration:
    Students must demonstrate a working test program for Project 3.

  • Statistical Accuracy:
    Increasing the number of ticks in test programs (e.g., from 50 to 100 or 200) can result in more reliable statistics.

  • Randomness Exploration:
    Consider implementing different pseudo-random number generators to enhance lottery scheduling behavior.

  • Debugging with GDB:
    Using GDB is advised to pinpoint locations where the system might be stalling or encountering errors.


8. Virtual Memory and Page Tables

Virtual memory was introduced as a critical concept:

  • Hardware and Kernel Cooperation:
    Virtual memory relies on both the computer’s hardware and the operating system kernel to manage memory effectively.

  • Page Mapping:
    Virtual addresses are mapped to physical addresses in fixed-size 4K pages.

  • Page Tables:
    Each process maintains its own page table to translate virtual addresses into physical memory addresses. With 32-bit addresses and 4K pages, the upper 20 bits of an address select the page and the lower 12 bits give the offset within it.


9. Virtual Address Space Mapping

Further elaboration on memory mapping included:

  • Per-Process Page Tables:
    Every process utilizes its own page table for virtual to physical address translation.

  • Mapping Function:
    A mapping function takes a page table and a page number as input to generate the corresponding page frame number.

  • Shared Memory:
    The same mapping infrastructure can be extended to establish shared memory between processes.


Conclusion

The session covered several essential concepts ranging from scheduler design and kernel locking mechanisms to virtual memory management. The next steps involve implementing and testing lottery and stride scheduling in Project 3, with additional focus on virtual memory and shared memory in future projects. Students are encouraged to review distributed practice problems and lecture notes for further clarification.

For any follow-up questions or additional assistance, students should consult their course materials or reach out to the instructor.