Message-ID: <55A5B98D.7010008@hp.com>
Date: Tue, 14 Jul 2015 21:38:21 -0400
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH 2/7] locking/pvqspinlock: Allow vCPUs kick-ahead
On 07/13/2015 09:52 AM, Peter Zijlstra wrote:
> On Sat, Jul 11, 2015 at 04:36:53PM -0400, Waiman Long wrote:
>> Frequent CPU halting (vmexit) and CPU kicking (vmenter) lengthen the
>> critical section and block forward progress. This patch implements
>> a kick-ahead mechanism where the unlocker kicks the queue head
>> vCPU as well as up to two additional vCPUs next to the queue head if
>> they were halted. The kicks are done after exiting the critical
>> section to improve parallelism.
>>
>> The amount of kick-ahead allowed depends on the number of vCPUs in
>> the VM guest. This change should improve overall system performance
>> in a busy overcommitted guest.
> -ENONUMBERS... also highly workload sensitive, if the lock hold time is
> just above our spin time you're wasting gobs of runtime.
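For reference, the unlock-side kick-ahead described above boils down to
roughly the following (a simplified sketch with stand-in types and a
stubbed pv_kick(); the actual patch code differs and, as noted above,
scales the kick-ahead limit with the number of vCPUs in the guest):

/*
 * Sketch of the unlock-side kick-ahead (illustrative only: the node
 * layout and helpers are simplified stand-ins, not the patch code).
 */
#include <stddef.h>

#define PV_KICK_AHEAD_MAX	2	/* extra vCPUs beyond the queue head */

enum vcpu_state { vcpu_running, vcpu_halted };

struct pv_node {
	struct pv_node	*next;
	int		cpu;
	enum vcpu_state	state;
};

/* Stand-in for the hypercall that wakes a halted vCPU (vmenter). */
static void pv_kick(int cpu)
{
	(void)cpu;
}

/*
 * Called by the unlocker after releasing the lock, so the cost of the
 * kicks overlaps with the next holder's critical section instead of
 * extending the unlocker's own.
 */
static void pv_kick_ahead(struct pv_node *head)
{
	struct pv_node *node = head;
	int i;

	/* Kick the queue head plus up to PV_KICK_AHEAD_MAX successors. */
	for (i = 0; node && i <= PV_KICK_AHEAD_MAX; i++, node = node->next) {
		if (node->state == vcpu_halted)
			pv_kick(node->cpu);
	}
}
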
Currently SPIN_THRESHOLD is (1<<15). I measured the pause instruction
at about 3ns on my 2.5GHz Haswell-EX system, which translates to a
spinning time of roughly 100us. I don't think we have critical sections
in the kernel that take that long. I also found that the kick-to-wakeup
time can be pretty long; kicking ahead lets the next CPU waiting in
line get the lock faster, because some of its wakeup time overlaps with
the critical sections of the previous lock holders. I will include
performance numbers in the v2 patch.
Cheers,
Longman