
Showing posts from December, 2025

CST334 - Week 7

Week 7 Reflection

This week, we covered a lot about persistence and how the OS interacts with files, directories, and I/O devices. I/O devices can range from memory buses to hard drives to keyboards. For a canonical I/O device, the canonical protocol is:

While (STATUS == BUSY) ; // wait until the device is not busy (polling the device)
Write data to the DATA register
Write command to the COMMAND register (doing so starts the device and executes the command)
While (STATUS == BUSY) ; // wait until the device is done with your request

The basic protocol for interacting with an IDE disk is:
1. Wait for the drive to be ready (polling).
2. Write parameters to the command registers.
3. Start the I/O.
4. Transfer the data.
5. Handle interrupts.
6. Handle errors.

I/O time, or T_I/O, is the sum of the seek delay, the rotational delay, and the transfer delay. In other words, T_I/O = T_seek + T_rotation + T_transfer. The rate of I/O is the size of the transfer divided by the I/O time, or Size_transfer /...

CST334 - Week 6

Week 6 Reflection

This week, we learned more about semaphores and the bugs commonly faced in concurrency, such as deadlocks.

A semaphore is a synchronization primitive with a modifiable integer value, and this value determines the semaphore's behavior. Semaphores can be used as both locks and condition variables for concurrency. sem_wait() and sem_post() are used to interact with a semaphore: sem_wait() decrements the value and blocks the caller if the value becomes negative, while sem_post() increments the value and then wakes a waiting thread, if any. If a semaphore's value is negative, its magnitude equals the number of waiting threads. For example, a value of -4 means four threads are waiting; incrementing by 1 changes it to -3, waking one thread and leaving three threads waiting.

Concurrency creates bugs that are difficult or costly to fix correctly. Non-deadlock bugs include atomicity violations and order violations. Atomicity violations occur when instructions that sho...

CST334 - Week 5

Week 5 Reflection

This week, I learned a great deal about concurrency, threads, and best practices for managing threads efficiently. A thread is like a process, except that threads can share the same address space and access the same data, rather than each occupying a separate address space. Multi-threading means handling multiple threads in the same address space, using thread control blocks (TCBs) to store each thread's state. In a multi-threaded process, each thread has its own stack, sometimes called thread-local storage.

In the aside, four key terms are highlighted:

critical section: code that accesses a shared resource, like a data structure or variable.
race condition (aka data race): occurs when multiple threads enter the critical section at the same time; because the threads can execute in any order, the program becomes indeterminate.
indeterminate program: a program with one or more r...