Posts

CST334 - Week 7

Week 7 Reflection This week, we covered a lot about persistence and how the OS interacts with files, directories, and I/O devices. I/O devices can range from memory buses to hard drives to keyboards. For a canonical I/O device, the basic protocol is:
While (STATUS == BUSY) ; // wait until device is not busy (polling the device)
Write data to DATA register
Write command to COMMAND register (doing so starts the device and executes the command)
While (STATUS == BUSY) ; // wait until device is done with your request
The basic protocol to interact with an IDE disk is: wait for the drive to be ready (polling), write parameters to the command registers, start the I/O, transfer the data, handle interrupts, and handle errors. I/O time, or T_I/O, is the seek delay plus the rotational delay plus the transfer delay. In other words, T_I/O = T_seek + T_rotation + T_transfer. The rate of I/O is the size of the transfer divided by the I/O time, or Size_transfer /...
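To see that polling protocol as real C, here is a minimal sketch assuming hypothetical memory-mapped STATUS, DATA, and COMMAND registers; the addresses and the BUSY bit value below are made up for illustration, not taken from any real device.

#include <stdint.h>

#define STATUS  ((volatile uint8_t *)0x1000)  /* hypothetical status register  */
#define DATA    ((volatile uint8_t *)0x1004)  /* hypothetical data register    */
#define COMMAND ((volatile uint8_t *)0x1008)  /* hypothetical command register */
#define BUSY    0x80                           /* assumed busy bit in STATUS    */

void device_write(uint8_t byte, uint8_t cmd)
{
    while (*STATUS & BUSY)   /* poll until the device is not busy */
        ;
    *DATA = byte;            /* write data to the DATA register */
    *COMMAND = cmd;          /* write the command; this starts the device */
    while (*STATUS & BUSY)   /* poll until the device finishes the request */
        ;
}

The two polling loops are what make this protocol simple but wasteful: the CPU spins on the status register instead of doing other work, which is why interrupts come up later in the chapter.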

CST334 - Week 6

Week 6 Reflection This week, we learned more about semaphores and the bugs that commonly come up in concurrency, such as deadlocks. A semaphore is a synchronization primitive with a modifiable integer value, and its behavior is determined by that value. Semaphores can be used as both locks and condition variables for concurrency. sem_wait() and sem_post() are used to interact with a semaphore: sem_wait() decrements the value and blocks the calling thread if the value becomes negative, while sem_post() increments the value and then wakes a waiting thread. If a semaphore's value is negative, its absolute value equals the number of waiting threads. For example, a value of -4 means four waiting threads; incrementing by 1 changes it to -3, waking one thread and leaving three waiting threads. Concurrency creates bugs that are difficult or costly to fix correctly. Non-deadlock bugs include atomicity violations and order violations. Atomicity violations occur when instructions that sho...
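As a small sketch of sem_wait()/sem_post() in action (my own example, not from the course materials), here a POSIX semaphore initialized to 1 acts as a lock around a shared counter:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;              /* used as a lock: initial value 1 */
int counter = 0;        /* shared data protected by the semaphore */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem); /* decrement; block if the value goes negative */
        counter++;      /* critical section */
        sem_post(&sem); /* increment; wake one waiting thread, if any */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);               /* 0 = shared between threads; value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* expect 200000 with the semaphore */
    sem_destroy(&sem);
    return 0;
}

Initializing the value to 1 makes it behave as a lock; initializing to 0 is the usual starting point when using a semaphore as a condition variable for ordering.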

CST334 - Week 5

Week 5 Reflection This week, I learned a great deal about concurrency, threads, and best practices for managing threads efficiently. A thread can be seen as a process that shares its address space, and therefore its data, with other threads rather than needing to occupy a separate address space. Multi-threading is handling multiple threads in the same address space, using thread control blocks (TCBs) to store thread states. In a multi-threaded process, each thread has its own stack, sometimes called thread-local storage. In the aside, four key terms are highlighted:
critical section: code that accesses a shared resource, like a data structure or variable.
race condition (aka data race): if multiple threads enter the critical section at the same time, a race condition occurs. These threads could be executed in any order, leading to the program becoming indeterminate.
indeterminate program: a program with one or more r...
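Here is a minimal sketch of a critical section (my own example, not from the lecture): two threads update a shared variable, and a pthread mutex keeps them from entering the critical section at the same time, so the result is no longer indeterminate.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int balance = 0;                     /* shared variable */

void *deposit(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter the critical section */
        balance = balance + 1;       /* shared access happens here */
        pthread_mutex_unlock(&lock); /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", balance);  /* 200000 with the lock; unpredictable without it */
    return 0;
}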

CST334 - Week 4

Week 4 Reflection Paging - Virtual address --> Physical address
1. Find the VPN bits (the first X bits of the virtual address).
2. Find the offset bits of the virtual address, which are the last Y bits (i.e., the virtual address with the leading VPN bits removed).
3. Find the corresponding PTE (if available). If its most significant bit (MSB), the valid bit, is 1, the address is valid. If 0, it is not valid and a physical address cannot be obtained from this virtual address.
4. Find the PFN in the remaining bits of the PTE. If its width is not listed, the number of pages physical memory can hold determines the number of PFN bits: ceil(log2(number of pages)).
5. Combine the PFN and offset, in this order, to obtain the translated physical address.
Swapping
Swapping makes use of a few policies to determine which pages to swap out of memory: The optimal policy (aka Belady or MIN) replaces the page that will be accessed furthest in the future, to maximize cache hit rates. This is not very practical because implementing pr...
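To make the translation steps concrete, here is a small C sketch under assumed parameters (4 KB pages, a tiny linear page table, and a PTE whose MSB is the valid bit with the PFN in the low-order bits); the numbers are illustrative, not from the assignment.

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS  12                         /* 4 KB pages -> 12 offset bits */
#define VPN_MASK     (~((1u << OFFSET_BITS) - 1))
#define OFFSET_MASK  ((1u << OFFSET_BITS) - 1)
#define PTE_VALID    0x80000000u                 /* MSB of the PTE = valid bit   */
#define PTE_PFN_MASK 0x7FFFFFFFu                 /* remaining bits hold the PFN  */

/* Translate a virtual address using a simple linear page table. */
int translate(uint32_t vaddr, const uint32_t *page_table, uint32_t *paddr)
{
    uint32_t vpn    = (vaddr & VPN_MASK) >> OFFSET_BITS; /* step 1: VPN bits    */
    uint32_t offset = vaddr & OFFSET_MASK;                /* step 2: offset bits */
    uint32_t pte    = page_table[vpn];                    /* step 3: look up PTE */

    if (!(pte & PTE_VALID))                               /* step 4: valid bit?  */
        return -1;                                        /* invalid translation */

    uint32_t pfn = pte & PTE_PFN_MASK;                    /* step 5: extract PFN */
    *paddr = (pfn << OFFSET_BITS) | offset;               /* step 6: combine     */
    return 0;
}

int main(void)
{
    /* Four-entry page table for illustration: VPN 0 -> PFN 7, VPN 2 -> PFN 3. */
    uint32_t page_table[4] = { PTE_VALID | 7, 0, PTE_VALID | 3, 0 };
    uint32_t paddr;
    if (translate(0x2ABC, page_table, &paddr) == 0)
        printf("physical address: 0x%X\n", paddr);  /* VPN 2 maps to PFN 3 -> 0x3ABC */
    return 0;
}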

CST334 - Week 3

Week 3 Reflection We learned about virtual memory and how to translate it into physical memory. The two memory types that we use in our programs are stack memory and heap memory. The stack is managed mostly by the compiler, while the heap is long-lived and only allocated/deallocated by the programmer. The heap is allocated near the base of the address space (0), just after the program code, and grows toward higher addresses, while the stack is allocated at the other end of the address space (the size of the address space) and grows downward towards the heap and program code. We used two methods of memory address translation - base and bounds (aka dynamic relocation), and paging. Base and bounds uses two registers: the base register and the bound (aka limit) register. With these two registers, a virtual address can be translated to a physical address by adding the base register to it (after checking it against the bound register). However, the free memory between the stack and heap is wasteful and prevents other, larger address spaces from fitting in desp...
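A short sketch of base-and-bounds translation in C (the register values here are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation registers set by the OS at context-switch time. */
uint32_t base   = 0x8000;   /* where this process begins in physical memory */
uint32_t bounds = 0x4000;   /* size of this process's address space          */

/* Translate a virtual address, or report an out-of-bounds access. */
int translate(uint32_t vaddr, uint32_t *paddr)
{
    if (vaddr >= bounds)     /* protection check against the bounds register */
        return -1;           /* real hardware would raise an exception here  */
    *paddr = vaddr + base;   /* relocation: physical = virtual + base        */
    return 0;
}

int main(void)
{
    uint32_t paddr;
    if (translate(0x0134, &paddr) == 0)
        printf("0x0134 -> 0x%X\n", paddr);  /* prints 0x8134 */
    return 0;
}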

CST334 - Week 2

Weekly Reflection, Week 2 I feel a lot more confident in my understanding of C now! From what we've covered in class: CPU scheduling is important for optimizing processing times. Scheduling makes use of two metrics: turnaround time and response time. Turnaround time is how long it takes for a process to complete from when it first entered the schedule. Response time is how long it takes for a process to begin its first run from when it first entered the schedule. The main scheduling policies we've seen are First In, First Out (FIFO, and its related policy Last In, First Out, or LIFO), Shortest Job First (SJF), Shortest Time-to-Completion First (STCF), and Round Robin (RR). FIFO chooses the first process entered and completes it, then moves to the second process entered, and so on. LIFO chooses the last process entered (the tail end) and completes it, then moves to the second-to-last process entered, and so on. SJF prioritizes jobs with the shortest duration and completes them first. STCF prioritizes jo...
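As a quick sketch of the two metrics (the job lengths are made up), here is turnaround and response time for a FIFO run of three jobs that all arrive at time 0:

#include <stdio.h>

int main(void)
{
    /* Three jobs that all arrive at time 0 and run FIFO, in order. */
    int run_time[3] = { 10, 20, 30 };
    int now = 0;
    double total_turnaround = 0, total_response = 0;

    for (int i = 0; i < 3; i++) {
        int response   = now;                /* response  = first run - arrival (arrival = 0) */
        int turnaround = now + run_time[i];  /* turnaround = completion - arrival             */
        total_response   += response;
        total_turnaround += turnaround;
        now += run_time[i];
    }
    printf("avg turnaround = %.1f, avg response = %.1f\n",
           total_turnaround / 3, total_response / 3);  /* 33.3 and 13.3 */
    return 0;
}

Running the shortest jobs first instead (SJF) would lower the average turnaround time, which is the point of comparing these policies.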

CST334 - Week 1

This Week's Reflection In this class, we discussed the basics of operating systems - how they work, why they exist, and what they do differently depending on the task at hand. We discussed memory, hardware and software integration, user mode vs. kernel mode, and what an operating system abstracts for the user. The main job the OS handles is running multiple processes at once to do work for the user. The OS performs this by virtualizing the CPU via time sharing. The rules that govern how the OS handles time sharing are called policies. I had trouble remembering what the * and & operators do in C. I get confused about where they are used and what differentiates them from each other, like a struct* var, a struct var, and struct *var. Using * before a variable (like *var) is a dereference. When declaring a pointer, use struct *var; this creates a variable of pointer-to-struct type. The & operator refers to an address...
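A small C snippet (my own, not from class) that walks through the forms that kept confusing me:

#include <stdio.h>

struct point { int x; int y; };

int main(void)
{
    struct point p = { 3, 4 };     /* a plain struct variable                  */
    struct point *ptr = &p;        /* a pointer to it; & takes p's address     */

    printf("p.x = %d\n", p.x);            /* direct access to the struct       */
    printf("(*ptr).x = %d\n", (*ptr).x);  /* * dereferences the pointer        */
    printf("ptr->y = %d\n", ptr->y);      /* -> is shorthand for (*ptr).y      */
    printf("&p = %p\n", (void *)&p);      /* & gives the address itself        */
    return 0;
}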