Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
311 views
in Technique[技术] by (71.8m points)

c++ - What will be the critical section code for a shared queue accessed by two threads?

Suppose we have a shared queue (implemented using an array) which two threads can access: one for reading data from it, and the other for writing data to it. Now I have a synchronization problem, which I'm handling using the Win32 APIs (EnterCriticalSection etc.).

But I'm curious: what should the critical section code be in the enqueue and dequeue operations of the queue?

Is it needed just because two threads are using a shared resource? The reason I can't see any problem is this: separate front and rear indices are maintained, so when the ReaderThread reads, it reads from the front, and when the WriterThread writes, it writes to the rear.

What potential problems can occur?



1 Reply

0 votes
by (71.8m points)

For a single producer/consumer circular queue implementation, locks are actually not required. Simply set a condition where the producer cannot write into the queue if the queue is full and the consumer cannot read from the queue if it is empty. Also the producer will always write to a tail pointer that is pointing to the first available empty slot in the queue, and the consumer will read from a head pointer that represents the first unread slot in the queue.

Your code can look like the following example (note: I'm assuming that in a freshly initialized queue tail == head, and that both pointers are declared volatile so that an optimizing compiler does not re-order the sequence of operations within a given thread. On x86, no memory barriers are required due to the architecture's strong memory consistency model, but this would change on other architectures with weaker memory consistency models, where memory barriers would be required):

queue_type::pointer queue_type::next_slot(queue_type::pointer ptr);

bool queue_type::enqueue(const my_type& obj)
{
    // Queue is full when advancing the tail would collide with the head.
    if (next_slot(tail) == head)
        return false;

    // Write the data first, then publish the slot by advancing the tail.
    *tail = obj;
    tail = next_slot(tail);

    return true;
}

bool queue_type::dequeue(my_type& obj)
{
    // Queue is empty when the head has caught up with the tail.
    if (head == tail)
        return false;

    // Read the data first, then release the slot by advancing the head.
    obj = *head;
    head = next_slot(head);

    return true;
}

The function next_slot simply increments the head or tail pointer so that it returns a pointer to the next slot in the array, and accounts for any array wrap-around functionality.
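As a sketch of what that wrap-around could look like (the member names buffer and capacity are assumptions here, since the original answer never shows the class definition), next_slot might be written as:

```cpp
#include <cstddef>

// Hypothetical minimal queue_type showing only the wrap-around increment.
// `buffer` and `capacity` are assumed members, not taken from the answer.
struct queue_type_sketch {
    static const std::size_t capacity = 8;
    int buffer[capacity];

    int* next_slot(int* ptr) {
        // Advance to the next array element, wrapping back to the start
        // of the storage when the end is reached.
        ++ptr;
        if (ptr == buffer + capacity)
            ptr = buffer;
        return ptr;
    }
};
```

Note that with this scheme one slot is always left unused: the queue is considered full when next_slot(tail) == head, so a buffer of capacity N holds at most N - 1 items.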

Finally, synchronization is guaranteed in the single producer/consumer model because the tail pointer is not incremented until the producer has written the data into the slot it was pointing to, and the head pointer is not incremented until the consumer has read the data from the slot it was pointing to. Therefore a call to dequeue will not return a valid item until at least one call to enqueue has been made, and the tail pointer can never overwrite the head pointer because of the check in enqueue. Additionally, only one thread ever increments the tail pointer and only one thread ever increments the head pointer, so there are no shared writes to the same pointer that would create a synchronization problem requiring a lock or some type of atomic operation.


