Operating Systems: Week 5
During this week, we learned about concurrency, the thread API, mutexes, and lock-based data structures. Concurrency lets a program do multiple things at once, and we achieve this within a process by using threads. A thread is a unit of execution inside a process, and each thread has its own stack and registers. A process usually starts with a single thread, but creating a new thread within the process is faster than creating a whole new process. I was introduced to multi-threaded programs, which have more than one thread running inside one process.
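To make the thread API more concrete, here is a minimal sketch of what a multi-threaded program might look like using pthreads, which is the library the OSTEP chapters use. The say_hello function and the two-thread setup are just examples I made up for illustration.

```c
#include <pthread.h>
#include <stdio.h>

// Each thread runs this function with its own stack and registers.
void *say_hello(void *arg) {
    printf("Hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    // Create two threads inside the same process.
    pthread_create(&t1, NULL, say_hello, (void *)1);
    pthread_create(&t2, NULL, say_hello, (void *)2);
    // Wait for both threads to finish before the process exits.
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Compiling with the -pthread flag and running it shows both threads printing, though the order can differ from run to run, which is part of what makes concurrency tricky.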
Concurrency’s benefits are the ability to run tasks “at the same time”, a useful programming abstraction, and the ability to take advantage of multicore machines as well as GPUs. The key concepts in concurrency are critical sections, race conditions, and mutual exclusion. We use locks to protect critical sections in multi-threaded programming. When implementing a lock, we evaluate its correctness, fairness, and performance, as these are the important criteria for judging it.
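Here is a small sketch of how a lock protects a critical section, loosely based on the shared-counter idea from the reading. The counter variable, the loop count, and the two-thread setup are my own assumptions for the example.

```c
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                                  // shared data
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // protects counter

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   // enter the critical section
        counter = counter + 1;       // only one thread at a time runs this
        pthread_mutex_unlock(&lock); // leave the critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // Without the lock, this could print less than 200000 because of a race condition.
    printf("counter = %d\n", counter);
    return 0;
}
```

With the mutex in place the program always prints 200000; removing the lock makes the result unpredictable, which is exactly the kind of race condition mutual exclusion is meant to prevent.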
Throughout this week’s chapters, we learned how threads work by going through many examples. Our professor also demonstrated what we learned by running the examples in Docker.
In OSTEP 28, I enjoyed reading, "Programmers tend to brag about how much code they wrote to do something. Doing so is fundamentally broken. What one should brag about, rather, is how little code one wrote to accomplish a given task" (pg. 13). I had always thought that the more code you have written, the better. However, this quote gave me a better understanding of what a programmer should value.