Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


linux kernel - LOCK HOLDER PREEMPTION mechanism

LOCK HOLDER PREEMPTION

One of the issues of full virtualization is the Lock Holder Preemption (LHP) problem, which occurs on multiprocessor VMs when a Guest-OS thread acquires a spinlock on one vCPU and the Hypervisor or Virtual Machine Monitor (VMM) preempts it before it can release the lock. The other Guest-OS threads keep spinning with no chance of acquiring the lock, wasting processor cycles.

In this post I propose a locking mechanism for multiprocessors and submit it to the stackoverflow community for analysis.

The trick is to allocate a processor to each spinlock.

// The new lock structure
struct new_lock {
   spinlock     new_spinlock;   // The spinlock structure of the Linux kernel
   int          new_spinproc;   // New field for the processor allocated to the lock
};

// When the spinlock is created, the current thread's processor is allocated to the lock,
// but any other allocation method could be used instead, e.g. allocate the processor with
// the lowest lock count.
new_init_lock (new_lock lock)
{
    lock.new_spinproc = getcpu();       // Allocate the current processor to the spinlock
    init_spinlock(lock.new_spinlock);   // Initialize the Linux spinlock
}

// When a thread needs to acquire the lock, the kernel scheduler moves it to the spinlock's
// processor ready queue. This can be done by changing the thread's CPU affinity mask (sched_setaffinity).
new_spin_lock (new_lock lock)
{
    switch_cpu(lock.new_spinproc); // move the thread to the spinlock's processor ready queue
    spin_lock(lock.new_spinlock);  // normal spinlock acquire
}

// When the thread releases the lock, the kernel scheduler moves it to a processor ready queue
// other than the spinlock's processor. This helps the next thread acquire the lock.
new_spin_unlock (new_lock lock)
{
    int new_cpu;
    spin_unlock(lock.new_spinlock);         // normal spinlock release
    new_cpu = other_cpu(lock.new_spinproc); // choose another processor
    switch_cpu(new_cpu);
}

This mechanism has the following advantages:

  1. All threads that need to acquire the lock are enqueued in the processor's ready queue without consuming CPU cycles.
  2. As the ready queue has priorities, it could be used by real-time tasks, avoiding the priority inversion problem.
  3. The LHP problem disappears.

One open issue remains: if the thread holding the lock is rescheduled before releasing it, it could be enqueued at the tail of the ready queue, and all the threads waiting for the lock release will consume their timeslices. To avoid this problem, before acquiring the lock, the kernel could check whether the lock is busy and enqueue all the other threads behind the lock holder thread (LHT).

new_spin_lock2 (new_lock lock)
{
    switch_cpu(lock.new_spinproc); // move the thread to the spinlock's processor ready queue
    while ( atomic_read(lock.new_spinlock) == LOCKED ){
        schedule(); // move the current thread to the ready queue tail, behind the LHT
    }
    // <<< here the current thread could be preempted by another higher-priority thread,
    // which could acquire the lock
    spin_lock(lock.new_spinlock);  // normal spinlock acquire
}

Conclusion: this mechanism enqueues active threads that need to acquire a lock behind the LHT.

I would appreciate your opinions and criticisms.

question from:https://stackoverflow.com/questions/65545484/lock-holder-preemption-mechanism


1 Reply

Waiting for answers
