CS 326 Lecture/Lab Spring 2025 Meeting Summary
Date: April 16, 2025, 06:32 PM Pacific Time (US and Canada)
Meeting ID: 813 3845 7711
Quick Recap
- Project 3 Framework:
- Focus on scheduling and process management.
- Introduction of a shared memory region for inter-process communication.
- Emphasis on keeping code modifications reasonable.
- Discussion of using coding assistance tools and of building a simple container implementation by semester's end.
- Containers & Cloud Technologies:
- Overview of Docker containers and Kubernetes.
- Explanation of containers functioning as isolated operating system copies.
- Differentiation between emulation (slow, educational), virtualization (efficient, host-supported), and containers (lightweight).
- System Calls & Scheduling Techniques:
- Introduction of new system calls: set_scheduler, set_prop_share, and sname.
- Explanation of proportional share (stride) and lottery scheduling.
- Challenges highlighted with non-deterministic language models in coding tasks.
Next Steps
- Project Specification & Scheduling Enhancements:
- Greg will review and finalize the Project 3 specification focusing on scheduling.
- In the next class, more details on the stride scheduling algorithm will be provided, including correcting the example table.
- Implementation Choices:
- Students are to choose their preferred method for Project 3: manual coding, using CS Tutor, or AI assistance.
- Review the provided test code for Project 3 to understand process share setting and CPU measurement.
- System Call & Share Mapping:
- Greg will discuss how share values map to stride constants in the upcoming lecture.
Detailed Summary
1. Project 3 Framework and Process Management
- Key Points:
- The project will focus on scheduling, process management, and introducing a shared memory region for process communication.
- The scope of code changes is limited to maintain project manageability.
- Potential inclusion of a basic container system by the semester’s end.
xv6-riscv Code Example: Adding a New System Call
The following snippet illustrates adding a new system call (e.g., sname for changing a process name) in xv6-riscv. Note that xv6-riscv's argstr() copies the user string into a kernel buffer, and the current process is obtained via myproc():

```c
// kernel/sysproc.c
uint64
sys_sname(void)
{
  char newname[16];
  struct proc *p = myproc();

  // Copy the first system-call argument (a user-space string) into newname.
  if(argstr(0, newname, sizeof(newname)) < 0)
    return -1;

  // Overwrite the process name shown by debugging tools such as procdump().
  safestrcpy(p->name, newname, sizeof(p->name));
  return 0;
}
```
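Assuming a matching user-level wrapper sname() has been declared in user/user.h and generated via user/usys.pl (neither step was shown in the meeting), a minimal user-program sketch exercising the call might look like this:

```c
// user/snametest.c -- hypothetical test program, not part of the official Project 3 handout
#include "kernel/types.h"
#include "user/user.h"

int
main(void)
{
  // Rename the current process; Ctrl-P (procdump) should show the new name.
  if(sname("renamed") < 0)
    printf("sname failed\n");
  else
    printf("process renamed\n");
  exit(0);
}
```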
2. Docker Containers and Kubernetes Overview
- Concepts Covered:
- Docker and Kubernetes enable the deployment of multiple containers on a cluster of physical machines.
- Containers provide isolated environments akin to separate operating system instances.
- Emulation, virtualization, and containers each have distinct trade-offs regarding performance and use cases.
Visual Representation: Container Ecosystem
```mermaid
graph TD
    A[Physical Machine] --> B[Docker Engine]
    B --> C[Container 1]
    B --> D[Container 2]
    B --> E[Container 3]
    C --> F[Kubernetes Management]
    D --> F
    E --> F
```
3. Virtualization vs. Emulation: Key Differences
- Virtualization:
- Runs operating systems supported by the host architecture.
- Provides near-native performance by directly interfacing with hardware.
- Acts as an extension of kernel control, trapping specific actions of the guest OS.
- Emulation:
- Interprets all instructions via software.
- Offers flexibility but at the cost of performance.
- Mainly used in educational contexts rather than production environments.
4. Emulation, Virtualization, and Container Differences
- Comparison:
- Emulation:
- Maximum flexibility by emulating everything within a single process.
- Slow performance.
- Virtualization:
- Cooperative interaction between hardware and software.
- Better performance compared to emulation.
- Containers:
- Efficient solution with minimal overhead.
- Limited by requiring the guest to share the host operating system kernel and architecture.
Visual Representation: Comparison Diagram
```mermaid
flowchart LR
    A[Emulation] -->|High Flexibility| B[Virtualization]
    B -->|Efficient Hardware Access| C[Containers]
    A -->|Slow Performance| D[Not ideal for Cloud Deployments]
    C -->|Limited OS Support| D
```
5. Virtualization vs. Containers: Curriculum Considerations
- Discussion Points:
- Virtualization can run any OS on any host with proper support.
- Containers require the same OS and architecture on host and guest.
- Containers are useful for creating controlled environments with set constraints (memory, disk, network).
- Early use of containers in curriculum helps in normalizing environment setups and mitigating configuration issues.
6. Non-Deterministic Language Models in Coding
- Challenges:
- Non-deterministic outputs from similar prompts can lead to different code generation results.
- Previous teaching approaches moved from user-level to kernel-level progressively.
- The proposed method involves using a coding system to systematically modify code and learn from the process.
- Ensuring consistent, reproducible output across runs remains an open challenge, since identical or similar prompts can still yield different code.
7. Exploring Schedulers: Proportional Share & Lottery
- Scheduling Techniques:
- Proportional Share (Stride Scheduling):
- Each process is allocated CPU time in proportion to its assigned shares.
- Allocation is based on assigned shares rather than an absolute CPU percentage.
xv6-riscv Code Example: Calculating Stride Value

```c
#define BIG_STRIDE 10000

// Calculate the stride value for a given share (share is assumed to be positive).
// A larger share yields a smaller stride, so the process is selected more often.
uint64
calc_stride(int share)
{
  return BIG_STRIDE / share;
}
```
- Lottery Scheduling:
- Processes are given a number of tickets.
- A random ticket is drawn to decide which process runs next, providing probabilistic fairness; a winner-selection sketch in C follows the diagram below.
Mermaid Diagram: Lottery Scheduler Process
```mermaid
flowchart TD
    A[Start Scheduling Loop] --> B{Any Runnable Processes?}
    B -- Yes --> C[Calculate Sum of Tickets]
    C --> D[Generate Random Number]
    D --> E[Select Process Winning the Lottery]
    E --> F[Execute Selected Process]
    F --> A
    B -- No --> G[System Idle]
```
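A minimal sketch of the winner-selection step is below. It assumes a `tickets` field has been added to struct proc and that a simple kernel PRNG rand_uint() is available; neither exists in stock xv6-riscv, and per-process locking is omitted for brevity:

```c
// Sketch of lottery winner selection, callable from scheduler() in kernel/proc.c.
// Assumptions: struct proc has an added `tickets` field; rand_uint() is a simple
// kernel PRNG added for this project. Locking (p->lock) is omitted for brevity.
struct proc *
pick_lottery_winner(void)
{
  struct proc *p;
  uint64 total = 0;

  // First pass: sum tickets of all runnable processes.
  for(p = proc; p < &proc[NPROC]; p++)
    if(p->state == RUNNABLE)
      total += p->tickets;

  if(total == 0)
    return 0;          // nothing runnable

  // Draw a winning ticket, then find the process holding it.
  uint64 winner = rand_uint() % total;
  uint64 counter = 0;
  for(p = proc; p < &proc[NPROC]; p++){
    if(p->state != RUNNABLE)
      continue;
    counter += p->tickets;
    if(counter > winner)
      return p;
  }
  return 0;
}
```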
8. New System Calls for Schedulers
- Key Changes Discussed:
- Implementation of new system calls such as:
- sname – to change the process name.
- set_scheduler – to change the scheduler mode.
- set_prop_share – to adjust the process share for proportional scheduling.
- Purpose:
- These system calls are essential for supporting the new scheduling algorithms in Project 3.
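Registering these calls follows the standard xv6-riscv pattern: pick an unused syscall number, declare the kernel handler, and add a user-space stub. A minimal sketch is below; the numbers 22-24 are placeholders and the actual values will come from the Project 3 handout:

```c
// kernel/syscall.h -- placeholder syscall numbers (assumed, not from the handout)
#define SYS_sname          22
#define SYS_set_scheduler  23
#define SYS_set_prop_share 24

// kernel/syscall.c -- declare the handlers and add them to the dispatch table
extern uint64 sys_sname(void);
extern uint64 sys_set_scheduler(void);
extern uint64 sys_set_prop_share(void);

static uint64 (*syscalls[])(void) = {
  // ... existing entries ...
  [SYS_sname]          sys_sname,
  [SYS_set_scheduler]  sys_set_scheduler,
  [SYS_set_prop_share] sys_set_prop_share,
};
```

User-space stubs are generated by adding entry("sname"), entry("set_scheduler"), and entry("set_prop_share") to user/usys.pl, plus matching prototypes in user/user.h.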
9. Kernel Scheduling Algorithms Implementation
- Implementation Overview:
- The standard scheduler is implemented within an infinite loop.
- It is suggested to modularize the code by creating separate functions for each scheduling mode (see the dispatch sketch at the end of this section):
- Standard Scheduling
- Lottery Scheduling
- Stride Scheduling
- For lottery scheduling, the process generally involves:
- Finding all runnable processes.
- Summing their tickets.
- Selecting one based on a random draw.
- Test Case Review:
- A test case demonstrates how multiple processes are forked and assigned shares.
- Focus on understanding how system calls integrate with the scheduling logic.
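A minimal sketch of how the scheduler() loop in kernel/proc.c might dispatch to per-mode selection functions is shown below. The sched_mode variable and SCHED_* constants are illustrative names assumed for this sketch; the pick_* helpers correspond to the lottery sketch above and the stride sketch later in this summary:

```c
// Sketch of a modularized scheduler loop in kernel/proc.c.
// sched_mode, the SCHED_* constants, and the pick_* helpers are assumed names
// for illustration; they are not part of stock xv6-riscv.
void
scheduler(void)
{
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;){
    intr_on();                       // allow device interrupts while idling

    struct proc *p = 0;
    switch(sched_mode){
    case SCHED_LOTTERY: p = pick_lottery_winner(); break;
    case SCHED_STRIDE:  p = pick_min_pass();       break;
    default:            p = pick_round_robin();    break;
    }

    if(p == 0)
      continue;                      // nothing runnable yet

    acquire(&p->lock);
    if(p->state == RUNNABLE){
      p->state = RUNNING;
      c->proc = p;
      swtch(&c->context, &p->context);   // run until the process yields
      c->proc = 0;
    }
    release(&p->lock);
  }
}
```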
10. Stride Scheduling Algorithm Explanation
- Concept Details:
- Stride scheduling allocates CPU time based on assigned shares rather than absolute CPU percentage.
- The algorithm maintains a “pass” value for each process and selects the process with the minimum pass value.
- After running, the process’s pass value is increased by its stride.
Mermaid Diagram: Stride Scheduling Flow
```mermaid
flowchart TD
    A[Initialize Processes with Stride and Pass Values] --> B[Find Process with Minimum Pass Value]
    B --> C[Run Selected Process]
    C --> D[Increment Process Pass Value by its Stride]
    D --> B
```
- Clarification:
- The scheduler does not check whether a specific CPU percentage has been achieved.
- Instead, it maintains and updates per-process pass values so that CPU time converges toward the allocated shares; a selection sketch follows below.
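A minimal selection sketch is shown below. It assumes `stride` and `pass` fields have been added to struct proc and initialized when a share is assigned (e.g., pass = 0, stride = calc_stride(share) from the earlier example); locking is omitted for brevity:

```c
// Sketch of stride-scheduling selection in kernel/proc.c.
// Assumes added `stride` and `pass` fields in struct proc; not part of stock xv6-riscv.
struct proc *
pick_min_pass(void)
{
  struct proc *p, *best = 0;

  // Choose the runnable process with the smallest pass value.
  for(p = proc; p < &proc[NPROC]; p++){
    if(p->state != RUNNABLE)
      continue;
    if(best == 0 || p->pass < best->pass)
      best = p;
  }

  // Advance the winner's pass by its stride; processes with larger shares
  // (smaller strides) are therefore selected proportionally more often.
  if(best)
    best->pass += best->stride;
  return best;
}
```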
Conclusion
The meeting provided a comprehensive overview of Project 3’s framework, including detailed discussions on scheduling algorithms (both proportional and lottery), container technologies, and the necessary new system calls within the kernel. Next steps involve refining the project specifications, deepening the discussion on stride scheduling, and ensuring students are equipped to implement the scheduling algorithms using provided references and sample code.
Students and instructors alike are encouraged to review the provided code snippets and diagrams, as they are instrumental in bridging theoretical concepts with concrete xv6-riscv implementation examples.