LOCK HOLDER PREEMPTION
One of the issues of full virtualization is the Lock Holder Preemption (LHP) problem, which occurs on multiprocessor VMs
when a Guest-OS thread acquires a spinlock on one vCPU and the Hypervisor or Virtual Machine Monitor (VMM) preempts that vCPU
before the lock can be released.
The other Guest-OS threads keep spinning with no possibility of acquiring the lock, wasting processor cycles.
In this post I propose a locking mechanism for multiprocessors, to be analyzed by the Stack Overflow community.
The trick is to allocate a processor to each spinlock.
// The new lock structure
struct new_lock {
    spinlock_t new_spinlock;   // The spinlock structure of the Linux kernel
    int new_spinproc;          // New field: the processor allocated to the lock
};

// When the spinlock is created, the current thread's processor is allocated to the lock,
// but any other allocation method could be used instead, e.g. allocate the processor with
// the lowest lock count.
void new_init_lock(struct new_lock *lock)
{
    lock->new_spinproc = getcpu();         // Allocate the current processor to the spinlock
    spin_lock_init(&lock->new_spinlock);   // Initialize the Linux spinlock
}
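As a sketch of the alternative allocation policy mentioned in the comment above, the lock could instead be bound to the processor with the fewest locks already assigned to it. The per_cpu_lock_count[] array, the MAX_CPUS limit and the least_loaded_cpu() helper are assumptions of this sketch, not part of the proposal.

#define MAX_CPUS 64                           // assumed compile-time limit for this sketch

static int per_cpu_lock_count[MAX_CPUS];      // locks currently bound to each processor

static int least_loaded_cpu(int ncpus)
{
    int cpu, best = 0;

    for (cpu = 1; cpu < ncpus; cpu++)
        if (per_cpu_lock_count[cpu] < per_cpu_lock_count[best])
            best = cpu;
    per_cpu_lock_count[best]++;               // account for the new lock on the chosen CPU
    return best;
}

new_init_lock() would then call least_loaded_cpu() instead of getcpu().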
// When a thread needs to acquire the lock, the kernel scheduler moves it to the spinlock
// processor's ready queue. This can be done by changing the thread's CPU affinity mask (sched_setaffinity).
void new_spin_lock(struct new_lock *lock)
{
    switch_cpu(lock->new_spinproc);   // move the thread to the spinlock's processor ready queue
    spin_lock(&lock->new_spinlock);   // normal spinlock acquire
}
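For reference, here is a minimal user-space sketch of what the switch_cpu() helper used above could look like, assuming the mechanism is driven through the affinity mask with sched_setaffinity(2); inside the kernel, set_cpus_allowed_ptr() would play the same role. switch_cpu() itself is a name invented for this proposal.

#define _GNU_SOURCE
#include <sched.h>

static int switch_cpu(int cpu)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);                               // restrict the thread to the given CPU
    return sched_setaffinity(0, sizeof(mask), &mask);  // 0 = the calling thread
}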
// When the thread releases the lock, the kernel scheduler moves it to a processor ready queue
// other than the spinlock's processor. This helps the next thread acquire the lock.
void new_spin_unlock(struct new_lock *lock)
{
    int new_cpu;

    spin_unlock(&lock->new_spinlock);          // normal spinlock release
    new_cpu = other_cpu(lock->new_spinproc);   // choose another processor
    switch_cpu(new_cpu);                       // move the thread off the spinlock's processor
}
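Similarly, other_cpu() could be sketched in user space as picking any online processor other than the lock's. Taking the next CPU modulo the online count is just the simplest choice; a real implementation could pick the least loaded processor instead.

#include <unistd.h>

static int other_cpu(int lock_cpu)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   // number of online CPUs

    if (ncpus <= 1)
        return lock_cpu;                          // nothing else to choose from
    return (lock_cpu + 1) % (int)ncpus;
}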
This mechanism has the following advantages:
- All threads that need to acquire the lock are enqueued in the processor's ready queue without consuming CPU cycles.
- Since the ready queue has priorities, the mechanism could be used by real-time tasks, avoiding the priority inversion problem.
- The LHP problem disappears.
One open issue remains: if the thread holding the lock is rescheduled before releasing it, it could be enqueued
at the tail of the ready queue, and all the threads waiting for the lock to be released would consume their timeslices.
To avoid this, before acquiring the lock the kernel could check whether the lock is busy and enqueue all
the other threads behind the lock holder thread (LHT).
void new_spin_lock2(struct new_lock *lock)
{
    switch_cpu(lock->new_spinproc);               // move the thread to the spinlock's processor ready queue
    while (spin_is_locked(&lock->new_spinlock)) {
        schedule();                               // move the current thread to the ready queue tail, behind the LHT
    }
    // <<< here the current thread could be preempted by another, higher-priority thread
    // which could acquire the lock
    spin_lock(&lock->new_spinlock);               // normal spinlock acquire
}
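To show how the pieces fit together, here is a hypothetical caller protecting a shared counter with the new lock; counter_lock, counter, counter_init() and counter_inc() exist only for this example.

struct new_lock counter_lock;
static long counter;

void counter_init(void)
{
    new_init_lock(&counter_lock);     // bind the lock to the creating thread's processor
}

void counter_inc(void)
{
    new_spin_lock(&counter_lock);     // migrate to the lock's processor, then acquire
    counter++;                        // the critical section runs on the lock's processor
    new_spin_unlock(&counter_lock);   // release, then migrate away
}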
Conclusion: this mechanism enqueues active threads that need to acquire a lock behind the LHT.
I would appreciate your opinions and criticisms.
question from:
https://stackoverflow.com/questions/65545484/lock-holder-preemption-mechanism