
CS 326 Lecture/Lab Meeting Summary – Spring 2025

Date: April 15, 2025
Time: 03:06 PM Pacific Time (US and Canada)
Meeting ID: 813 3845 7711


Quick Recap

During the session, several topics were addressed:

  • Assignment Management:
    The process for accepting and managing assignments on the designated platform was discussed, including cloning assignments locally and generating test cases.

  • Coding Assistants & Model Efficiency:
    Different coding assistants (Claude 3.7, OpenAI GPT-4.1, Gemini 2.5) were compared. The discussion focused on how input tokens cost less than output tokens, and how careful prompting can limit the amount of output generated and thus the overall expense. Emphasis was placed on keeping the tools up to date and experimenting with various models.

  • Scheduling Techniques:
    Fundamental scheduling algorithms, including round-robin, lottery, and stride scheduling, were explained. The challenges encountered with Gemini and the xv6 scheduler, as well as system issues and potential workarounds such as gaming the scheduler by splitting tasks into multiple processes, were also discussed.

  • Proportional Share Scheduling:
    The session introduced the concept of allocating CPU shares proportionally among processes, with a focus on lottery scheduling and its deterministic variant—stride scheduling.


Next Steps

For Students

  • Follow the Coding Exercise:
    Engage with the vibe coding exercise to implement lottery scheduling in xv6.

  • Implement Scheduling Algorithms:
    Develop both lottery and stride scheduling in xv6 as part of the upcoming project.

  • Concept Review:
    Review proportional share scheduling, including both lottery and stride algorithms, to reinforce the learning outcomes.

  • Troubleshoot AI Assistants:
    Address any issues encountered with AI assistants and retry the exercise as needed.

For the Professor

  • Project Specification:
    Distribute a detailed project specification for next week’s assignment.

  • Continued Demonstration:
    Provide further explanations and demonstrations related to proportional share scheduling in the next class.


Detailed Discussion

1. Accepting and Managing Assignments Process

Greg explained the process of assignment management on the course platform. Key points included:

  • Assignment Acceptance:
    Students must accept and clone assignments to their local systems.

  • Test Cases:
    Students will generate and inspect their own test cases to ensure correct functionality.

  • Troubleshooting:
    Early system issues were encountered and resolved, underscoring the need for vigilance during the setup.


2. Coding Assistants and Model Efficiency

The discussion covered how coding assistants optimize the coding workflow:

  • Input vs. Output Costs:
    Input tokens are billed at a lower rate than output tokens. Structuring the input so the model produces only what is needed can reduce output expenses.

  • Toolchain Setup:
    Instructions were given on setting up different models (Claude 3.7, OpenAI GPT-4.1, Gemini 2.5) in VS Code.

  • Software Updates:
    Keeping the coding tools and models updated is essential to maintain efficiency and accuracy.


3. Round-Robin Scheduling Process

Round-robin scheduling was presented as follows:

  • Mechanism:
    The scheduler runs on every CPU and iterates over the process table to select the next runnable process (a scheduler-loop sketch follows the diagram below).

  • Example:
    With five runnable processes, the scheduler cycles through them, ensuring fairness in CPU time distribution.

Mermaid Diagram: Round-Robin Scheduling

flowchart TD
    A[Scheduler] --> B[Process Table]
    B --> C[Process 1]
    B --> D[Process 2]
    B --> E[Process 3]
    B --> F[Process 4]
    B --> G[Process 5]
    C --> H1[Execute Process 1]
    D --> H2[Execute Process 2]
    E --> H3[Execute Process 3]
    F --> H4[Execute Process 4]
    G --> H5[Execute Process 5]
    H1 --> A
    H2 --> A
    H3 --> A
    H4 --> A
    H5 --> A
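
The mechanism described above corresponds closely to the round-robin loop in xv6-riscv’s kernel/proc.c. The following is a lightly simplified sketch of that loop; interrupt bookkeeping details are abbreviated.

xv6 Round-Robin Scheduler Loop (C Code Sketch)

// Lightly simplified sketch of scheduler() from xv6-riscv kernel/proc.c.
void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();

  c->proc = 0;
  for(;;){
    // Enable interrupts so devices (e.g. the timer) can still fire.
    intr_on();

    // Walk the process table looking for a runnable process.
    for(p = proc; p < &proc[NPROC]; p++){
      acquire(&p->lock);
      if(p->state == RUNNABLE){
        // Run the chosen process; it hands control back to this loop
        // by calling swtch() from sched().
        p->state = RUNNING;
        c->proc = p;
        swtch(&c->context, &p->context);

        // The process is done running for now.
        c->proc = 0;
      }
      release(&p->lock);
    }
  }
}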

4. Switching Between System Processes

The session detailed how the system manages context switching:

  • Context Switch Mechanism:
    The kernel saves the current process’s state (registers, stack pointer) and, when the scheduler later selects that process again, restores it so execution resumes exactly where it left off (see the yield()/sched() sketch below).

  • State Preservation:
    When a process yields or receives a timer interrupt, its state is stored to allow precise resumption.
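
As a concrete illustration, here is a simplified sketch of the xv6-riscv yield()/sched() path that hands the CPU back to the scheduler; the sanity checks and interrupt-state bookkeeping in the real sched() are omitted.

xv6 Yield and Context Switch Path (C Code Sketch)

// Voluntarily give up the CPU for one scheduling round
// (e.g. after a timer interrupt).
void
yield(void)
{
  struct proc *p = myproc();
  acquire(&p->lock);
  p->state = RUNNABLE;   // still ready to run, just not right now
  sched();
  release(&p->lock);
}

// Switch to the per-CPU scheduler context.  The caller must hold
// p->lock; execution resumes here when the scheduler runs p again.
// (The real xv6 sched() also performs several consistency checks.)
void
sched(void)
{
  struct proc *p = myproc();
  swtch(&p->context, &mycpu()->context);
}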


5. Process Forking and Scheduler Mechanism

Key points regarding process management included:

  • Forking:
    A new child process is created as a copy of its parent. fork() (through allocproc()) finds an unused slot in the process table for the child.

  • Scheduler Return:
    A running process hands control back to the scheduler only when it traps into the kernel, for example via a system call or a timer interrupt.

xv6 Process Forking (C Code Example)

// Simplified xv6-riscv fork() snippet: create a child process that is a
// copy of the calling process (some error handling is abbreviated).
int
fork(void)
{
  int i, pid;
  struct proc *np;
  struct proc *p = myproc();   // the calling (parent) process

  // Allocate a new process structure; allocproc() returns with np->lock held.
  if((np = allocproc()) == 0)
    return -1;

  // Copy the parent's user address space into the child.
  if(uvmcopy(p->pagetable, np->pagetable, p->sz) < 0){
    freeproc(np);
    release(&np->lock);
    return -1;
  }
  np->sz = p->sz;

  // Copy the saved user registers and make fork() return 0 in the child.
  *(np->trapframe) = *(p->trapframe);
  np->trapframe->a0 = 0;

  // Duplicate open file descriptors and the current working directory.
  for(i = 0; i < NOFILE; i++){
    if(p->ofile[i])
      np->ofile[i] = filedup(p->ofile[i]);
  }
  np->cwd = idup(p->cwd);

  safestrcpy(np->name, p->name, sizeof(p->name));
  pid = np->pid;

  release(&np->lock);

  // Record the parent, then mark the child RUNNABLE for the scheduler.
  acquire(&wait_lock);
  np->parent = p;
  release(&wait_lock);

  acquire(&np->lock);
  np->state = RUNNABLE;
  release(&np->lock);

  return pid;
}

6. Challenges with Gemini and the xv6 Scheduler

Greg shared his difficulties with alternative scheduler implementations:

  • Diagram Generation and Stability:
    Issues arose while generating diagrams, and the system was at times unstable.
  • System Mode Switching:
    The system unexpectedly switched modes while attempting to generate a Mermaid diagram.
  • Need for Frequent Updates:
    Ongoing frustrations highlighted the necessity of keeping systems updated to handle such challenges.

7. Round-Robin Scheduling and Priority Inversion

The discussion highlighted:

  • Fairness of Round-Robin:
    In an idealized scenario, each process receives an equal share of CPU time.
  • Priority Inversion:
    A lower-priority process can block a higher-priority one, for example by holding a lock the higher-priority process needs while a medium-priority process keeps preempting it.
  • Historical Context:
    Early Unix systems used a hybrid approach with “nice” levels, but later improvements addressed these issues.
  • Proposed Solutions:
    A potential solution was mentioned without detailed explanation.

8. Scheduling in Unix Machines

Greg further elaborated on scheduling challenges in multi-user systems:

  • Resource Competition:
    Users logging into Unix machines may inadvertently compete for CPU resources.
  • Gaming the Scheduler:
    Under a per-process round-robin policy, dividing work into multiple processes artificially increases the total CPU time a user receives (a sketch follows this list).
  • Proposed Allocation:
    A concept was proposed for assigning a fixed CPU share per user, ensuring fair resource distribution regardless of process count.
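
To make the gaming point concrete, here is a hypothetical user-level sketch in xv6-riscv’s user-program style (not shown in the lecture): under plain round-robin, a user who forks several CPU-bound children occupies that many slots in the process table and so receives a correspondingly larger share of the CPU.

Gaming a Round-Robin Scheduler (C Code Sketch)

#include "kernel/types.h"
#include "user/user.h"

// Hypothetical demonstration program.  Forking NCHILD busy-loop children
// gives this user roughly NCHILD times the CPU share of a user running a
// single process under a plain round-robin scheduler.
#define NCHILD 4

int
main(void)
{
  for(int i = 0; i < NCHILD; i++){
    if(fork() == 0){
      // Child: spin forever, consuming its full time slice every round.
      volatile unsigned long counter = 0;
      for(;;)
        counter++;
    }
  }

  // Parent: wait on the children (which, in this sketch, never exit).
  for(int i = 0; i < NCHILD; i++)
    wait(0);
  exit(0);
}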

9. Proportional Share Scheduling in xv6

The central part of the lecture focused on advanced scheduling strategies:

  • Lottery Scheduling:
    Processes are assigned tickets proportional to their desired share of CPU time. At each scheduling decision point, a ticket is drawn at random, favoring processes with a higher number of tickets.

  • Implementation in xv6:
    A new system call was proposed to toggle between the standard and proportional share scheduling modes (one possible shape is sketched after the lottery code below). A user-level test program would demonstrate the effectiveness of these strategies.

xv6 Lottery Scheduler (C Code Example)

#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"

// Lottery scheduling sketch for xv6-riscv.  It assumes two additions a
// student would make: a `tickets` field in struct proc and a
// `random_at_most(max)` helper returning a value in [0, max].  Locking is
// omitted for clarity.  Returns the pid of the winning process, or -1 if
// no process is runnable.
extern struct proc proc[NPROC];   // the process table (defined in kernel/proc.c)

int
lottery_scheduler(void)
{
  struct proc *p;
  int total_tickets = 0;

  // Compute the total number of tickets held by runnable processes.
  for(p = proc; p < &proc[NPROC]; p++){
    if(p->state == RUNNABLE)
      total_tickets += p->tickets;
  }
  if(total_tickets == 0)
    return -1;

  // Draw a winning ticket uniformly at random.
  int winning_ticket = random_at_most(total_tickets - 1);

  // Walk the table again; the process whose ticket range contains the
  // winning ticket is the winner.
  for(p = proc; p < &proc[NPROC]; p++){
    if(p->state == RUNNABLE){
      winning_ticket -= p->tickets;
      if(winning_ticket < 0)
        return p->pid;
    }
  }
  return -1;
}
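
The mode-toggle system call was only proposed, not specified; the following is one hypothetical shape for it. The names sys_schedmode, sched_mode, SCHED_RR, and SCHED_LOTTERY are illustrative and do not exist in stock xv6.

Scheduling Mode Toggle System Call (C Code Sketch)

#include "types.h"
#include "riscv.h"
#include "defs.h"
#include "param.h"
#include "spinlock.h"
#include "proc.h"

// Hypothetical syscall (as it might appear in kernel/sysproc.c) letting a
// user program switch between round-robin and lottery scheduling.
#define SCHED_RR      0
#define SCHED_LOTTERY 1

int sched_mode = SCHED_RR;   // consulted by the scheduler on each decision

uint64
sys_schedmode(void)
{
  int mode;

  // Fetch the integer argument passed from user space.
  argint(0, &mode);

  if(mode != SCHED_RR && mode != SCHED_LOTTERY)
    return -1;

  sched_mode = mode;
  return 0;
}

Wiring this up would also require the usual xv6 syscall plumbing: entries in kernel/syscall.h and kernel/syscall.c, plus a user-space stub in user/usys.pl and user/user.h.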

10. Scheduling Variations and Stride Scheduling

Additional scheduling mechanisms were introduced:

  • Stride Scheduling:
    A deterministic alternative to lottery scheduling, offering more predictable CPU time allocation (the core idea is sketched after this list).
  • Mode Switching:
    The possibility of switching scheduling methods at runtime was discussed, allowing xv6 to start in standard mode before transitioning to lottery or stride scheduling.
  • Weight Normalization:
    The idea of normalizing process weights (summing to 100) was suggested to simplify CPU time allocation.
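
For reference, the core of stride scheduling can be sketched as follows: each process is given a stride inversely proportional to its tickets and a running pass value; the scheduler always picks the runnable process with the smallest pass and then advances that pass by the stride, so CPU time converges to the ticket ratios over time. The sketch below assumes added tickets, stride, and pass fields in struct proc; none of these exist in stock xv6, and locking is omitted for clarity.

xv6 Stride Scheduling (C Code Sketch)

#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"

#define STRIDE1 10000   // large constant used to derive per-process strides

extern struct proc proc[NPROC];   // the process table (defined in kernel/proc.c)

// Pick the runnable process with the smallest pass value, then charge it
// for this scheduling decision by advancing its pass by its stride.
struct proc*
stride_pick(void)
{
  struct proc *p;
  struct proc *chosen = 0;

  for(p = proc; p < &proc[NPROC]; p++){
    if(p->state != RUNNABLE)
      continue;
    if(chosen == 0 || p->pass < chosen->pass)
      chosen = p;
  }

  if(chosen){
    chosen->stride = STRIDE1 / chosen->tickets;  // assumes tickets >= 1
    chosen->pass += chosen->stride;
  }
  return chosen;
}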

11. Implementing a CPU Scheduling Solution

The session concluded with a review of the overall CPU scheduling solution:

  • Algorithm Comparison:
    Lottery scheduling (a probabilistic method) was compared with stride scheduling (a deterministic method), with a preference for the simpler lottery approach.
  • Project Outlook:
    The upcoming project would require students to implement these scheduling strategies while keeping the solution straightforward for educational purposes.
  • Future Work:
    Implementation challenges will be addressed in subsequent classes, with further demonstrations slated for future sessions.

Conclusion

The lecture provided a comprehensive overview of assignment management and CPU scheduling techniques, especially within xv6. Students received guidance on practical coding exercises and scheduling algorithm implementation, while the professor outlined future steps and project specifications.

This session set the stage for hands-on projects that will emphasize proportional share scheduling through both lottery and stride methods. Participants are encouraged to review the discussed concepts and prepare for further experimentation in upcoming classes.