Readers-writer lock
In computer science, a readers-writer lock (also known by the name multi-reader lock,[1] or by typographical variants such as readers/writers lock) is a synchronization primitive that solves one of the readers-writers problems. A readers-writer lock is like a mutex, in that it controls access to some shared memory area, but it allows multiple threads to read from the shared area concurrently. Any thread that needs to write to the shared memory, of course, needs to acquire an exclusive lock.
One potential problem with a conventional RW lock is that it can lead to writer starvation: as long as at least one reading thread holds the lock, no writer thread can acquire it. Because multiple reader threads may hold the lock at once, a writer may keep waiting while new readers continue to acquire the lock, even after every reader that held the lock when the writer first tried to acquire it has finished its work in the shared area and released the lock. To avoid writer starvation, a variant of the readers-writer lock can be constructed that prevents any new readers from acquiring the lock while a writer is queued and waiting, so that the writer acquires the lock as soon as the readers that already hold it are finished. This variation is sometimes known as a "write-preferring" or "write-biased" readers-writer lock.[2][3]
Readers-writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores. They are rarely implemented from scratch.
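As a rough illustration of the mutex-and-condition-variable construction, the following is a minimal sketch of a write-preferring readers-writer lock in C with POSIX threads. The type and function names (rwl_t, rwl_read_lock, and so on) are invented for this example, and error handling is omitted; it is a sketch of the technique, not a production implementation.

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int readers;          // number of threads currently reading
    int writer;           // 1 if a writer holds the lock
    int writers_waiting;  // writers queued; new readers must wait (write-preferring)
} rwl_t;
/* e.g. static rwl_t lock = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0, 0 }; */

void rwl_read_lock(rwl_t *l) {
    pthread_mutex_lock(&l->m);
    while (l->writer || l->writers_waiting)   // block new readers while a writer waits
        pthread_cond_wait(&l->cv, &l->m);
    l->readers++;
    pthread_mutex_unlock(&l->m);
}

void rwl_read_unlock(rwl_t *l) {
    pthread_mutex_lock(&l->m);
    if (--l->readers == 0)
        pthread_cond_broadcast(&l->cv);       // last reader out: let a waiting writer in
    pthread_mutex_unlock(&l->m);
}

void rwl_write_lock(rwl_t *l) {
    pthread_mutex_lock(&l->m);
    l->writers_waiting++;
    while (l->writer || l->readers > 0)       // wait for all readers and any active writer
        pthread_cond_wait(&l->cv, &l->m);
    l->writers_waiting--;
    l->writer = 1;
    pthread_mutex_unlock(&l->m);
}

void rwl_write_unlock(rwl_t *l) {
    pthread_mutex_lock(&l->m);
    l->writer = 0;
    pthread_cond_broadcast(&l->cv);           // wake both waiting readers and writers
    pthread_mutex_unlock(&l->m);
}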
The read-copy-update (RCU) algorithm is one solution to the readers-writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for cases with few writers, called a seqlock.
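The seqlock idea can be sketched roughly as follows (this is a simplification, not the Linux kernel's actual code): the writer increments a sequence counter before and after updating the data, and a reader retries whenever the counter was odd or changed during its read. The names are invented for the example, and a real implementation also needs memory barriers and race-free data accesses, which are glossed over here.

#include <stdatomic.h>

atomic_uint seq = 0;          // even: data stable; odd: a write is in progress
int shared_x, shared_y;       // example data protected by the seqlock

void writer_update(int x, int y) {        // assumes writers are serialized elsewhere
    atomic_fetch_add(&seq, 1);            // counter becomes odd: readers will retry
    shared_x = x;
    shared_y = y;
    atomic_fetch_add(&seq, 1);            // counter becomes even again
}

void reader_snapshot(int *x, int *y) {
    unsigned s1, s2;
    do {
        s1 = atomic_load(&seq);           // remember the counter before reading
        *x = shared_x;
        *y = shared_y;
        s2 = atomic_load(&seq);           // re-check it after reading
    } while ((s1 & 1) || s1 != s2);       // retry if a write overlapped the read
}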
A read/write lock pattern or simply RWL is a software design pattern that allows concurrent read access to an object but requires exclusive access for write operations.
In this pattern, multiple readers can read the data in parallel but an exclusive lock is needed while writing the data. When a writer is writing the data, readers will be blocked until the writer is finished writing.
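POSIX threads expose this pattern directly as pthread_rwlock_t; the snippet below is a minimal sketch of guarding a shared integer with it (the variable and function names are illustrative).

#include <pthread.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;

int read_value(void) {                  // many threads may run this concurrently
    pthread_rwlock_rdlock(&lock);
    int v = shared_value;
    pthread_rwlock_unlock(&lock);
    return v;
}

void write_value(int v) {               // only one writer at a time, and no readers
    pthread_rwlock_wrlock(&lock);
    shared_value = v;
    pthread_rwlock_unlock(&lock);
}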
Producer-consumer problem (http://cs.gmu.edu/cne/modules/ipc/aqua/producer.html)
==================
The code might look like this:
BufferSize = 3;
count = 0;

Producer() {
    int widget;
    WHILE (true) {                      // loop forever
        make_new(widget);               // create a new widget to put in the buffer
        IF (count == BufferSize)
            Sleep();                    // if the buffer is full, sleep
        put_item(widget);               // put the item in the buffer
        count = count + 1;              // increment count of items
        IF (count == 1)
            Wakeup(Consumer);           // if the buffer was previously empty,
    }                                   // wake the consumer
}
Consumer() {
    int widget;
    WHILE (true) {                      // loop forever
        IF (count == 0)
            Sleep();                    // if the buffer is empty, sleep
        remove_item(widget);            // take an item from the buffer
        count = count - 1;              // decrement count of items
        IF (count == BufferSize - 1)
            Wakeup(Producer);           // if the buffer was previously full,
                                        // wake the producer
        Consume_item(widget);           // consume the item
    }
}
Keep in mind these four conditions, which should be satisfied by any good mutual exclusion method:
- No two processes may enter their critical sections at the same time.
- No assumptions may be made about speeds or numbers of processors.
- No process not in its critical section may block another process.
- No process should have to wait forever to enter its critical section.
Busy Waiting
How do we implement a mutual exclusion policy? No matter what policy we implement, a process that is about to enter its critical section must first check to see if any other processes are in their critical sections. As we shall see, there are a number of potential ways to enforce mutual exclusion. Ideally, we would like to implement a policy that works as efficiently as possible.
What about using a lock variable, which must be tested by each process before it enters its critical section? If another process is already in its critical section, the lock is set to 1, and the process currently using the processor is not permitted to enter its critical section. If the value of the lock variable is 0, then the process enters its critical section and sets the lock to 1. The problem with this potential solution is that reading the lock variable, comparing its value to 0, and setting the lock are three separate actions, not one atomic operation. With this solution, it is possible that one process tests the lock variable and sees that it is open, but before it can set the lock, another process is scheduled, runs, sets the lock, and enters its critical section. When the original process runs again, it too will enter its critical section, violating the policy of mutual exclusion.
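The sketch below, with invented names, shows where this naive lock variable breaks down; the comments mark the window in which a context switch violates mutual exclusion. Hardware test-and-set style instructions (for example C11's atomic_exchange) exist precisely to make the test and the set a single indivisible step.

int lock_var = 0;                       // 0 = free, 1 = taken

void enter_region(void) {
    while (lock_var != 0)               // (1) read and test the lock
        ;                               //     busy wait while it is taken
    /* A context switch here lets another process run the same test,
       also see 0, and grab the lock first. */
    lock_var = 1;                       // (2) set the lock: not atomic with step (1)
}

void leave_region(void) {
    lock_var = 0;                       // release the lock
}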
Sleep and Wakeup
As we have seen, busy waiting can be wasteful. Processes waiting to enter their critical sections waste processor time checking to see if they can proceed. A better solution to the mutual exclusion problem, which can be implemented with the addition of some new primitives, would be to block processes when they are denied access to their critical sections. Two primitives, Sleep and Wakeup, are often used to implement blocking in mutual exclusion.
How do Sleep and Wakeup Work?
Essentially, when a process is not permitted to access its critical section, it uses a system call known as Sleep, which causes that process to block. The process will not be scheduled to run again, until another process uses the Wakeup system call. In most cases, Wakeup is called by a process when it leaves its critical section if any other processes have blocked.
As we have seen, sleep and wakeup is a solution to the problem of race conditions which does not require busy waiting. One must be careful in implementing a sleep and wakeup policy, since race conditions can arise in certain circumstances. Consider the following implementation of sleep and wakeup.
var occupied;                           // 1 if critical section is occupied, 0 if not
var blocked;                            // counts the number of blocked processes

Enter_Region:                           // process enters its critical section
{
    IF (occupied) {                     // if the critical section is occupied,
        blocked = blocked + 1;          // increment the blocked counter
        sleep();                        // and go to "sleep", i.e. block
    }
    ELSE
        occupied = 1;                   // otherwise mark the critical section occupied
}
...
...                                     // critical section
...
Exit_Region:                            // process exits its critical section
{
    occupied = 0;
    IF (blocked) {
        wakeup(process);                // if another process is sleeping, wake it up
        blocked = blocked - 1;          // decrement the blocked counter
    }
}
In the example above, a process wanting to enter its critical section must first check to see if any other processes are currently in the critical section. If so, the process calls the Sleep() system call, which causes it to block. Upon leaving its critical section, a process must check to see if any other processes have been put to sleep, waiting to enter their critical section. If a process is waiting, the process now exiting its critical section must wakeup the sleeping process.
Now consider what would happen if two processes, A and B, were using this method for mutual exclusion. Process A enters its critical section, and before it leaves the critical section, process B is scheduled. Now process B attempts to enter its critical section. It sees that process A is already there, so it increments the "blocked" variable in preparation to go to sleep. Just before it can call the Sleep primitive, process A is scheduled again. Process A exits its critical section. Upon exiting, it sees that the "blocked" variable is now 1, so it calls Wakeup on process B. But process B is not yet asleep. The Wakeup is lost, so when process B is scheduled again, it will call Sleep and block forever.
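One common way to avoid this lost wakeup is to protect the test with a mutex and block on a condition variable, so that checking the condition and going to sleep happen atomically with respect to the waker. The sketch below is illustrative, not a drop-in replacement for the pseudocode above; pthread_cond_wait releases the mutex and blocks as one step, so a wakeup cannot slip in between the test and the sleep.

#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int occupied = 0;

void enter_region(void) {
    pthread_mutex_lock(&m);
    while (occupied)                    // test and sleep both happen under the mutex
        pthread_cond_wait(&cv, &m);     // atomically unlock and block; relock on wakeup
    occupied = 1;
    pthread_mutex_unlock(&m);
}

void exit_region(void) {
    pthread_mutex_lock(&m);
    occupied = 0;
    pthread_cond_signal(&cv);           // wake one waiter, if any
    pthread_mutex_unlock(&m);
}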
Sleep and Wakeup using Message Passing
So far we have discussed two different types of interprocess communication: semaphores and monitors. These methods work well on computers with shared memory, but they are not effective on distributed systems. Semaphores and monitors provide no machine-to-machine communication capability. The last method we will discuss, message passing, solves this problem. Two primitives, SEND and RECEIVE, are used in the message passing scheme. The SEND primitive sends a message to a destination process, while the RECEIVE primitive receives a message from a specified source process. Message passing works on distributed systems because these messages can be sent from machine to machine through a network.
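As a rough local illustration of SEND and RECEIVE, the sketch below maps them onto POSIX message queues (mq_send and mq_receive) within a single process; in real use the sender and receiver would be separate processes, and on a distributed system the messages would travel over a network instead. The queue name and message format are invented for the example (link with -lrt on Linux).

#include <mqueue.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* SEND: deliver a message to the destination queue. */
    const char *msg = "widget #1";
    mq_send(q, msg, strlen(msg) + 1, 0);

    /* RECEIVE: block until a message arrives on the queue. */
    char buf[64];
    if (mq_receive(q, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}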