Posts

Operating Systems: Last Week

Throughout this course, I have learned several new concepts about how computers work ‘behind the scenes’. Every week, I was introduced to eye-opening and sometimes confusing material, but when I kept reading and going over the videos provided to me, everything made more sense. I spent several hours each week viewing and reviewing the material to fully understand it; it was a necessary part of each week. Two topics that I found interesting and useful were paging and threads. Both of these concepts helped me understand how multitasking and memory are handled in operating systems. Paging helps improve memory efficiency and reduce fragmentation, while threads are a way to make a program more efficient and responsive. This helped me understand how larger programs are able to function efficiently. On the other hand, I found process scheduling to be one of the more challenging topics. It took me a while to fully understand each of the schedulers. The one...

Operating Systems: Week 7

During this week, we went through the last chapters of this course, covering I/O devices, hard drives, files and directories, and file systems. In chapter 36, on I/O devices, we went over how the OS actually talks to the devices. We reviewed computer architecture and looked at the PC architecture. There are two categories of I/O devices: block devices and character devices, also known as stream devices. Hard drives are block devices, while printers, keyboards, and mice are character devices. There are three different ways that the CPU interacts with I/O: polling, interrupts, and direct memory access. Polling is better when we need very low latency. Interrupts are better when small amounts of data arrive intermittently. Direct memory access is better when there is an abundance of data that we want to move. In chapter 37, on hard drives, I learned more about them. I also learned how to calculate the...

Operating Systems: Week 6

During this week, we learned about several topics, including condition variables, semaphores, and common concurrency problems. A condition variable is an explicit queue where threads can wait for a condition and signal other threads when the condition might be true. Condition variables work in tandem with a mutex to help avoid race conditions. I learned this through the readings as well as the many detailed examples in the videos provided by the professor. Here are examples of condition variable usage: pthread_cond_wait(cond, lock), pthread_cond_signal(cond), and pthread_cond_broadcast(cond). An important part of using condition variables is to wait in a while loop, not an if statement. The reason is that the condition can change while the thread is asleep; an if statement would not recheck the condition after waking, so a while loop is the safer option. I also learned about semaphores and how a semaphore can be tricky to work with, but can also help coordinate access to...

Operating Systems: Week 5

During this week, we learned about concurrency, the thread API, mutexes, and lock-based data structures. Concurrency lets us run multiple tasks at once, and threads are how a single process takes advantage of it. A thread is a unit of execution within a process, and each thread has its own stack and registers. Usually, a process starts with one thread, but creating a new thread in a process is faster than creating a new process. I was introduced to multi-threaded programs, which have more than one thread in a process. Concurrency’s benefits are the ability to run tasks “at the same time”, a useful programming abstraction, and leverage on multicore machines as well as GPUs. The key concepts in concurrency are critical sections, race conditions, and mutual exclusion. We use locks to help us in multi-threaded programming. When implementing a lock, we evaluate its correctness, fairness, and performance, as these are important parts of implementing it. ...

Operating Systems: Week 4

The paging part of this week was actually very enjoyable. I think this is because I was able to better understand the material by giving myself more time to go through it all. I enjoyed the practice problems in the videos, book, and labs. During this week, we went through more information about what memory consists of: paging, Translation Lookaside Buffers (TLBs), multi-level paging, and swapping. Paging breaks the virtual address space into fixed-size pages. This method avoids external fragmentation and is flexible. Each process has its own page table that translates virtual addresses into physical addresses. A virtual address consists of a virtual page number and an offset. I learned how the virtual page number is mapped to a page frame number through the examples in the videos as well as in the book. A downside of paging is that address translations are slow, and the page tables are too big. This is where ...

Operating Systems: Week 3

During this week, we went into depth about the memory of a program. I read chapters 13-17, which covered address spaces, the C memory API, address translation and base-and-bounds, segmentation, and free-space management. I learned that an address space refers to the range of memory addresses that a process can use. When a program runs, it refers to addresses in this space, and we have goals when dealing with memory: transparency, efficiency, and protection. With these goals come some challenges, which we can address with virtual memory. This week, I also learned how to use malloc() and free(), and the many mistakes that can be avoided. Address translation takes a virtual address and determines whether it is invalid (which leads to a trap) or valid, in which case it is mapped to physical memory. I also learned about base-and-bounds and how each process has its own base and bounds values. Additionally, I learned the importance of segmentation....

Operating Systems: Week 2

During this past week in this class, I learned about processes, limited direct execution, CPU scheduling, and the multi-level feedback queue (MLFQ). From chapter 4 of the book as well as the video provided to us, I learned about processes, virtualization, context switches, process state, and scheduling policy. A process is a running program that has memory and a thread of execution. Multiple processes can run the same program at once. Virtualization, on the other hand, involves splitting system resources to support different environments or demands. I also learned that the OS needs to manage the state of a process. A process can be ready to run or running; an I/O request can block the process, and it is sent back to ready when the I/O completes.

Throughout this week, I also learned about and practiced process scheduling and its metrics. There are two metrics that we practiced with, and these are the Turn Around Time (TAT...