Message-ID: <20150714204507.GN19282@twins.programming.kicks-ass.net>
Date: Tue, 14 Jul 2015 22:45:07 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <waiman.long@...com>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH 6/7] locking/qspinlock: A fairer queued unfair lock
On Tue, Jul 14, 2015 at 02:47:16PM -0400, Waiman Long wrote:
> On 07/12/2015 04:21 AM, Peter Zijlstra wrote:
> >On Sat, Jul 11, 2015 at 04:36:57PM -0400, Waiman Long wrote:
> >>For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >>will be used if PV spinlock is not configured in or the hypervisor is
> >>neither KVM nor Xen.
> >Why do we care about this case enough to add over 300 lines of code?
>
> From my testing, I found the queued unfair lock to be superior to both
> the byte lock and the PV qspinlock when the VM is overcommitted. My
> current opinion is to use PV qspinlock for VMs that are not likely to
> run into the overcommitment problem. For other VMs that are
> overcommitted, it will be better to use the queued unfair lock.
> However, this is a choice that the system administrators have to make.
> That is also the reason why I sent out another patch to add a KVM
> command line option to disable PV spinlocks, like the one Xen already
> has. In this way, depending on how the kernel is booted, we can choose
> either PV qspinlock or the queued unfair lock.
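
(A minimal sketch of the selection logic described in the quoted patch,
in kernel-style C; use_unfair_byte_lock() and use_pv_qspinlock() are
hypothetical helper names for illustration, not the patch's actual code:)

	/*
	 * Sketch only: fall back to the unfair byte lock when PV spinlock
	 * support is not built in, or when the hypervisor is neither KVM
	 * nor Xen; otherwise use the PV qspinlock.
	 */
	static void __init virt_spinlock_init(void)
	{
		if (!IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS) ||
		    (!kvm_para_available() && !xen_domain())) {
			use_unfair_byte_lock();	/* hypothetical helper */
			return;
		}
		use_pv_qspinlock();		/* hypothetical helper */
	}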
No, we're not going to add another 300-line lock implementation and a
knob.
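
(For reference, the kind of boot knob under discussion follows the
pattern of Xen's existing xen_nopvspin early_param; a hedged sketch of a
KVM-side equivalent, with the option name kvm_nopvspin chosen purely for
illustration:)

	static bool kvm_nopvspin __initdata;

	/* Sketch of a "kvm_nopvspin" boot option, modeled on Xen's
	 * xen_nopvspin; the name and wiring are illustrative only. */
	static int __init parse_kvm_nopvspin(char *arg)
	{
		kvm_nopvspin = true;
		return 0;
	}
	early_param("kvm_nopvspin", parse_kvm_nopvspin);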