If you’ve been following along, I’m currently studying for my PhD qualifying exam. As a part of that exam, I need to brush up on my operating systems knowledge a bit. Previously, I covered a few process synchronization mechanisms, and now I want to start digging into how those mechanisms can be used to solve real problems like the Readers-Writers problem.
Readers-Writers Problem Overview
In concurrency, one of the common challenges is balancing who has access to a shared variable. In particular, what sort of rules should we set around shared memory access, so that actions are safe but also efficient?
More specifically, the Readers-Writers problem focuses on the challenges related to balancing threads or processes which either wish to read from a shared memory location or write to it. In other words, how do we go about scheduling the threads such that we get our desired outcome (i.e. readers priority, writers priority, etc.)?
In this article, we’ll look at a few Readers-Writers scenarios and try to solve them using some of the mechanisms we chatted about last time like semaphores and monitors. To be clear, all code in this article will be pseudocode except potential examples leveraging Java’s concurrency features. All examples are borrowed from The Ohio State University’s CSE 6431 lecture notes:
- OSU CSE 6431 Process Synchronization Part 2 (Semaphores)
- OSU CSE 6431 Process Synchronization Part 3 (Semaphores)
- OSU CSE 6431 Process Synchronization Part 4 (Monitors)
Naturally, all analysis is my own.
Readers-Writers Problem Solutions
In this section, we’ll look at various solutions to the Readers-Writers problem using different process synchronization mechanisms. Each subsection covers a different type of solution, organized further by mechanism.
Exclusive Access Solution
Orchestrating several readers and writers can be challenging. One way to solve that issue is to organize the reads and writes such that only one process can access shared memory at a time. With this solution, correctness is easy to achieve. However, efficiency is not guaranteed as there is no mechanism for process priority.
With semaphores, we can define two procedures: reader and writer. Each procedure is protected by the same mutex:
```
procedure reader():
    P(mutex)
    <read>
    V(mutex)

procedure writer():
    P(mutex)
    <write>
    V(mutex)
```
With monitors, the shared resource can be defined inside the monitor. Then, we set up two procedures: reader and writer. Since monitor resources are protected, we can casually call the procedures without worrying about any race conditions:
```
procedure reader():
    <read>

procedure writer():
    <write>
```
In general, this will get the job done, but it’s not ideal.
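To make this concrete, here’s a minimal sketch of the exclusive solution in Java, using a binary java.util.concurrent.Semaphore as the mutex. The SharedCounter and ExclusiveDemo names (and the counter itself) are my own for illustration; only the P/V structure comes from the pseudocode above:

```java
import java.util.concurrent.Semaphore;

// A shared counter where every read and write is wrapped in the same
// binary semaphore, mirroring the P(mutex)/V(mutex) pairs above.
class SharedCounter {
    private final Semaphore mutex = new Semaphore(1); // binary semaphore
    private int value = 0;

    int read() {
        mutex.acquireUninterruptibly(); // P(mutex)
        try {
            return value;               // <read>
        } finally {
            mutex.release();            // V(mutex)
        }
    }

    void increment() {
        mutex.acquireUninterruptibly(); // P(mutex)
        try {
            value++;                    // <write>
        } finally {
            mutex.release();            // V(mutex)
        }
    }
}

class ExclusiveDemo {
    // Runs several writers; because access is mutually exclusive,
    // no increment is lost, and the result is deterministic.
    static int runDemo() throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counter.increment();
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join();
        }
        return counter.read(); // 4 threads x 1000 increments = 4000
    }
}
```

Since every access funnels through one semaphore, reads serialize behind writes and behind each other, which is exactly the inefficiency the later solutions address.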
Concurrent Reader Solution
Since read operations have no effect on a shared resource, we usually allow them to occur concurrently. In other words, if two readers happen to come along when nothing is writing to our shared resource, we should let both readers read.
With semaphores, the reading process has to be broken into three stages:
- Check if it is safe to read (i.e. no writers currently writing)
- Read
- Check if it is safe to write (i.e. no more readers)
Also, we have to introduce an additional mutex, so we can start tracking readers in one queue and writers in another queue. On top of all that, we also have to introduce a shared variable which we’ll use to track the number of active readers:
```
procedure reader():
    # Stage 1
    P(reader_mutex)
    if readers = 0:
        P(writer_mutex)
    readers++
    V(reader_mutex)

    # Stage 2
    <read>

    # Stage 3
    P(reader_mutex)
    readers--
    if readers = 0:
        V(writer_mutex)
    V(reader_mutex)

procedure writer():
    P(writer_mutex)
    <write>
    V(writer_mutex)
```
Notice how the readers variable is used to draw a connection between the readers and writers. If a reader realizes it’s first, it needs to snag the writer mutex to avoid any shared memory access issues. If successful, the readers hold onto that mutex until there aren’t any readers left.
If for some reason the first reader doesn’t get the writer mutex, it’s stuck waiting while also holding onto the reader mutex. In other words, all readers are frozen until the first reader gets the writer mutex.
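Here’s one way the three-stage semaphore solution might look in Java. The class and field names are my own; the structure follows the pseudocode above, with acquireUninterruptibly standing in for P and release standing in for V:

```java
import java.util.concurrent.Semaphore;

// Concurrent-reader solution: readers share the resource, while writers
// get exclusive access. ConcurrentReadStore and its names are illustrative.
class ConcurrentReadStore {
    private final Semaphore readerMutex = new Semaphore(1); // guards `readers`
    private final Semaphore writerMutex = new Semaphore(1); // guards the resource
    private int readers = 0;
    private int value = 0;

    int read() {
        // Stage 1: the first reader locks out writers
        readerMutex.acquireUninterruptibly();
        if (readers == 0) {
            writerMutex.acquireUninterruptibly();
        }
        readers++;
        readerMutex.release();

        // Stage 2: any number of readers may be here at once
        int result = value;

        // Stage 3: the last reader lets writers back in
        readerMutex.acquireUninterruptibly();
        readers--;
        if (readers == 0) {
            writerMutex.release();
        }
        readerMutex.release();
        return result;
    }

    void write(int newValue) {
        writerMutex.acquireUninterruptibly(); // P(writer_mutex)
        value = newValue;                     // <write>
        writerMutex.release();                // V(writer_mutex)
    }
}
```

A writer blocks until the last active reader releases writerMutex, while readers that arrive together all pass through stage 2 concurrently.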
Like the semaphore solution, the monitor solution to concurrent reader access needs a few changes. In particular, we’re going to need to split the reader procedure into two procedures. That way, we can remove the reading functionality from the monitor, so reads don’t occur serially. In addition, we’ll need to define a shared readers variable just like before.
That said, there is a slight difference in the monitor solution. Instead of maintaining two queues explicitly, we maintain one using a condition variable called writer:
```
procedure begin_read():
    readers++

# Reading occurs between these procedures

procedure end_read():
    readers--
    if readers = 0:
        writer.signal

procedure write():
    if readers > 0:
        writer.wait
    <write>
    writer.signal
```
Since only one process can execute a monitor procedure at a time, we don’t have to worry about any sort of race conditions in that regard. However, since the shared resource (i.e. a file) is not protected by our monitor, we have to prevent writers from writing during concurrent reads. To do that, we introduce a writer condition variable which we use to stall writers if there are any readers.
Otherwise, life is pretty simple for a reader. If we’re able to read (i.e. not blocked by the monitor), we increment the number of readers before reading. After reading, we decrement the number of readers. If the current reader is the last reader, it hands control off to any waiting writers.
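This maps fairly naturally onto Java’s built-in monitors (synchronized/wait/notifyAll). One caveat: unlike the Hoare-style monitor in the pseudocode, a woken Java thread must recheck its condition, so the writer uses a while loop instead of an if. ReadCountMonitor and its method names are my own:

```java
// Monitor sketch of the concurrent-reader solution. The readValue call
// happens between beginRead and endRead, outside the monitor, so reads
// can overlap; the beginRead/endRead protocol keeps writers out meanwhile.
class ReadCountMonitor {
    private int readers = 0;
    private int value = 0;

    synchronized void beginRead() {
        readers++;
    }

    synchronized void endRead() {
        readers--;
        if (readers == 0) {
            notifyAll(); // writer.signal
        }
    }

    synchronized void write(int newValue) {
        while (readers > 0) { // writer.wait, rechecked on wakeup
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return; // abandon the write if interrupted
            }
        }
        value = newValue; // <write>
        notifyAll();      // writer.signal
    }

    int readValue() {
        // Safe without the lock here: writers only run when readers == 0,
        // and beginRead's lock acquisition publishes the latest write.
        return value;
    }
}
```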
Readers’ Priority Solution
In essence, the concurrent reader solution works by giving priority to whichever process arrives first. To some extent, the readers get priority. After all, as long as there are readers, they will continue to read. However, writers never hand off control to readers, so it’s possible that writers could starve readers for a while (and vice versa).
If we want a true readers’ priority solution, we have to look into a mechanism where writers pass control over to readers when they’re finished.
In order to craft a readers’ priority solution with semaphores, we have to introduce yet another semaphore:
```
procedure reader():
    # Stage 1
    P(reader_mutex)
    if readers = 0:
        P(writer_mutex)
    readers++
    V(reader_mutex)

    # Stage 2
    <read>

    # Stage 3
    P(reader_mutex)
    readers--
    if readers = 0:
        V(writer_mutex)
    V(reader_mutex)

procedure writer():
    P(sync_mutex)
    P(writer_mutex)
    <write>
    V(writer_mutex)
    V(sync_mutex)
```
At first glance, this solution looks exactly like the concurrent readers solution. Of course, the difference is in the writer procedure which leverages the new mutex. How could two additional lines guarantee readers’ priority?
As it turns out, the new mutex ensures that writer processes are only ever enqueued for the writer_mutex when they already hold the sync_mutex. In other words, there can only ever be one writer waiting for the writer_mutex at a time. As a result, there is no way for a writer to pass control off to another writer if a reader is already waiting for the writer_mutex.
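In Java, this might look like the concurrent-reader sketch with one extra semaphore wrapped around the writer. One wrinkle: java.util.concurrent.Semaphore is unfair by default, so a faithful translation passes true to the constructor to get FIFO queuing. The class and field names are my own:

```java
import java.util.concurrent.Semaphore;

// Readers' priority sketch: identical to the concurrent-reader solution
// except writers must hold syncMutex before queuing on writerMutex, so at
// most one writer can wait on writerMutex at a time. Fair (FIFO) semaphores
// approximate the queue behavior the analysis above depends on.
class ReadersPriorityStore {
    private final Semaphore readerMutex = new Semaphore(1, true);
    private final Semaphore writerMutex = new Semaphore(1, true);
    private final Semaphore syncMutex = new Semaphore(1, true); // the new mutex
    private int readers = 0;
    private int value = 0;

    int read() {
        // Stage 1: the first reader locks out writers
        readerMutex.acquireUninterruptibly();
        if (readers == 0) {
            writerMutex.acquireUninterruptibly();
        }
        readers++;
        readerMutex.release();

        // Stage 2
        int result = value;

        // Stage 3: the last reader hands control back
        readerMutex.acquireUninterruptibly();
        readers--;
        if (readers == 0) {
            writerMutex.release();
        }
        readerMutex.release();
        return result;
    }

    void write(int newValue) {
        syncMutex.acquireUninterruptibly();   // P(sync_mutex)
        writerMutex.acquireUninterruptibly(); // P(writer_mutex)
        value = newValue;                     // <write>
        writerMutex.release();                // V(writer_mutex)
        syncMutex.release();                  // V(sync_mutex)
    }
}
```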
Unfortunately, the monitor solution isn’t nearly as elegant as the semaphore solution. On top of adding a new condition variable and a new shared variable, the monitor solution needs to break apart its writer procedure:
```
procedure begin_read():
    if writing:
        safe_read.wait
    readers++
    safe_read.signal

procedure end_read():
    readers--
    if readers = 0:
        safe_write.signal

procedure begin_write():
    if writing or readers > 0:
        safe_write.wait
    writing = true

procedure end_write():
    writing = false
    if safe_read.queue:
        safe_read.signal
    else:
        safe_write.signal
```
Unlike the previous monitor solution, this solution relies quite a bit more on driving access through queue signaling. In particular, we maintain two condition variables: safe_read and safe_write. If it’s safe to read, we signal the safe_read queue. Meanwhile, if it’s safe to write, we signal the safe_write queue.
From a reading perspective, not much has changed. If there is any active writing, readers are expected to wait. Otherwise, they read. When readers are finished reading, they are expected to decrement their reader count. As with the concurrent readers solution, the last reader is responsible for signaling the next writer.
From a writing perspective, a lot has changed. In addition to a new shared variable called writing, we now have two procedures instead of one:
- The begin_write procedure is responsible for verifying that it’s safe to write (i.e. no one else is writing, and there are no readers). If it’s not safe to write, writers are expected to wait. Otherwise, they indicate that they’re writing before writing.
- The end_write procedure indicates that writing has finished. If there are any readers waiting, writers are responsible for signaling them (aka readers’ priority). Otherwise, they signal another writer.
To me, while more complex, this solution seems a lot more intuitive than the semaphore solution. In particular, we have direct communication between processes which seems more like how we would perform a task like this in our daily lives.
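For completeness, here’s a sketch of this monitor in Java using ReentrantLock with two Condition queues. Since Java conditions are signal-and-continue rather than Hoare-style, each await sits in a while loop, hasWaiters stands in for safe_read.queue, and a woken reader re-signals safe_read to cascade the wakeup to the other waiting readers. Names (ReadersPriorityMonitor, activeReaders) are my own:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Readers' priority monitor sketch with explicit condition queues.
class ReadersPriorityMonitor {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition safeRead = lock.newCondition();
    private final Condition safeWrite = lock.newCondition();
    private int readers = 0;
    private boolean writing = false;

    void beginRead() {
        lock.lock();
        try {
            while (writing) {
                safeRead.awaitUninterruptibly(); // safe_read.wait
            }
            readers++;
            safeRead.signal(); // cascade the wakeup to the next reader
        } finally {
            lock.unlock();
        }
    }

    void endRead() {
        lock.lock();
        try {
            readers--;
            if (readers == 0) {
                safeWrite.signal(); // safe_write.signal
            }
        } finally {
            lock.unlock();
        }
    }

    void beginWrite() {
        lock.lock();
        try {
            while (writing || readers > 0) {
                safeWrite.awaitUninterruptibly(); // safe_write.wait
            }
            writing = true;
        } finally {
            lock.unlock();
        }
    }

    void endWrite() {
        lock.lock();
        try {
            writing = false;
            if (lock.hasWaiters(safeRead)) {
                safeRead.signal(); // readers' priority
            } else {
                safeWrite.signal();
            }
        } finally {
            lock.unlock();
        }
    }

    int activeReaders() {
        lock.lock();
        try {
            return readers;
        } finally {
            lock.unlock();
        }
    }
}
```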
In this article, we focused on two process synchronization mechanisms, semaphores and monitors, and how they can be used to solve a few versions of the Readers-Writers problem. In addition to the problems covered in this article, there are a handful of related problems including:
- Writers’ priority
- Smallest job first
- Alternating readers and writers
- Limiting concurrent readers
And, I’m sure there are many more. In the interest of time, I chose not to solve these problems. However, if you’re interested in learning more about these topics, I’d be happy to expand this list.
Want to Learn More?
At this point, I’m now through three topics I need to study for this qualifying exam. For now, I’m going to continue slogging through operating systems. However, at some point, I’m really going to need to tackle algorithms.
If you’re looking for recommendations of related material, check out some of the following books:
- The Little Book of Semaphores: The Ins and Outs of Concurrency Control and Common Mistakes by Allen Downey
- Java Concurrency in Practice by Brian Goetz
I can’t personally endorse any of these books, but I have verified their relevance to process synchronization. In addition, both books are highly rated, and the reviews are solid. For instance, John Robertson gave the Java Concurrency in Practice book a 5-star review with the following testimony:
This book is really an important book for multithreaded programming. Even if you never touch Java, if you used multiple threads you really ought to make sure you know the pitfalls this book covers.
If books aren’t your thing, you’re always welcome to browse some of my favorite articles on this site:
- The Difference Between Statements and Expressions
- How to Teach Arrays in Computer Science
- Working with Challenging College Students
As always, thanks for stopping by!