Message-Id: <201605300855.u4U8sRoK014321@mx0a-001b2d01.pphosted.com>
Date: Mon, 30 May 2016 16:53:41 +0800
From: xinhui <xinhui.pan@...ux.vnet.ibm.com>
To: Waiman Long <waiman.long@....com>
CC: linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...hat.com
Subject: Re: [PATCH] pv-qspinlock: Try to re-hash the lock after spurious_wakeup
On 05/28/2016 11:41 AM, Waiman Long wrote:
> On 05/27/2016 06:32 AM, xinhui wrote:
>>
>>> On 05/27/2016 02:31 AM, Waiman Long wrote:
>>> On 05/25/2016 02:09 AM, Pan Xinhui wrote:
>>>> In pv_wait_head_or_lock(), if there is a spurious wakeup and the lock
>>>> cannot be taken because of lock stealing, then after a short spin we
>>>> need to hash the lock again and enter pv_wait() to yield.
>>>>
>>>> Currently, after a spurious wakeup, l->locked is no longer _Q_SLOW_VAL,
>>>> so pv_wait() may do nothing and return directly. That is not
>>>> paravirt-friendly, because pv_wait_head_or_lock() will then just keep
>>>> spinning on the lock.
>>>>
>>>> Signed-off-by: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
>>>> ---
>>>> kernel/locking/qspinlock_paravirt.h | 39 +++++++++++++++++++++++++++++--------
>>>> 1 file changed, 31 insertions(+), 8 deletions(-)
>>>
>>> Is this a problem you can easily reproduce on PPC? I have not observed this issue when testing on x86.
>>>
>> Hi, Waiman
>> I noticed that the spurious_wakeup count is very high when I run benchmark and stress tests. So after a quick investigation,
>> I found that pv_wait_head_or_lock() just keeps looping.
>>
>
> That shouldn't happen in the normal case. When testing on x86, I typically get the following stat data for an over-committed guest:
>
> pv_lock_slowpath=9256211
> pv_lock_stealing=36398363
> pv_spurious_wakeup=311
> pv_wait_again=294
> pv_wait_early=3255605
> pv_wait_head=173
> pv_wait_node=3256280
>
OK, here is the result after running the command "perf bench sched messaging -g 512":
pv_lock_slowpath=2331407
pv_lock_stealing=192038
pv_spurious_wakeup=236319
pv_wait_again=215668
pv_wait_early=177299
pv_wait_head=9206
pv_wait_node=228781
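For reference, the pv_wait_head_or_lock() loop I am talking about looks
roughly like this (condensed from the 4.6-era
kernel/locking/qspinlock_paravirt.h; statistics and error handling
trimmed):

	for (;; waitcnt++) {
		set_pending(lock);
		for (loop = SPIN_THRESHOLD; loop; loop--) {
			if (trylock_clear_pending(lock))
				goto gotlock;
			cpu_relax();
		}
		clear_pending(lock);

		if (!lp) {	/* the lock is hashed only ONCE */
			lp = pv_hash(lock, pn);
			if (xchg(&l->locked, _Q_SLOW_VAL) == 0) {
				WRITE_ONCE(*lp, NULL);	/* stolen, unhash */
				goto gotlock;
			}
		}
		pv_wait(&l->locked, _Q_SLOW_VAL);
		/*
		 * After a spurious wakeup l->locked is usually not
		 * _Q_SLOW_VAL any more, so a pv_wait() that compares
		 * *ptr with val returns immediately, and the queue head
		 * only ever spins here; it never really sleeps again.
		 */
	}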
> The queue head doesn't call pv_wait() that often. There are a few spurious wakeups, but they are mostly caused by lock stealing. How long does a cpu_relax() take on PPC?
>
946012160 cpu_relax() loops in 10 seconds. So if SPIN_THRESHOLD is 1<<15, it costs about 0.3ms to spin on the lock. How about x86?
And only 10134976 pv_wait/pv_kick hyper-call loops within 10 seconds, so every hyper-call itself (the so-called latency) costs less than 1us.
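Spelling the arithmetic out, for the record:

	946012160 loops / 10 s        ~= 10.6 ns per cpu_relax()
	(1 << 15) loops * 10.6 ns     ~= 0.35 ms per spin round
	10 s / 10134976 hyper-calls   ~= 0.99 us per pv_wait/pv_kick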
>> Here is my story: in my pv-qspinlock patchset v1 and v2, pv_wait() on ppc ignored its two parameters, *ptr and val, and that made lock_stealing hit too much.
>
> The pvqspinlock code does depend on pv_wait() doing a final check to see if the lock value has changed. The code may not work reliably without that.
>
Agreed. So pv_wait() now does the check of *ptr against val.
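A minimal sketch of that check on the ppc side (hypothetical code, not
the actual implementation; yield_to_hypervisor() is a made-up stand-in
for whatever yield/wait hcall we end up using):

	static void pv_wait(u8 *ptr, u8 val)
	{
		/* final check: the lock byte may have changed already */
		if (READ_ONCE(*ptr) != val)
			return;		/* don't sleep, nobody may kick us */
		yield_to_hypervisor();	/* hypothetical yield hcall */
	}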
>> And when I changed SPIN_THRESHOLD to a small value, the system became very unstable, because the waiter enters pv_wait quickly and no one will kick the waiter's cpu if
>> we enter pv_wait twice, thanks to the lock stealing.
>> So what I did in my pv-qspinlock patchset v3 is add an if (*ptr == val) check in pv_wait. However, as I mentioned above, the spurious_wakeup count then becomes too high, which also means our cpu
>> slices are wasted.
>
> The SPIN_THRESHOLD should be sufficiently big. A small value will cause too many waits and wake-ups, which may not be good. Anyway, more testing and tuning may be needed to make the pvqspinlock code work well with PPC.
>
Agreed, but I think the SPIN_THRESHOLD (1<<15) is a little large for ppc.
I have even come up with the idea of making SPIN_THRESHOLD an extern variable on ppc. But I am busy, and I wonder if it's worth doing.
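Something like the sketch below, just to give the idea (pv_spin_threshold
is a made-up name, and how it would get set at boot is an open question):

	/* sketch: let ppc choose the spin threshold at run time */
	#ifdef CONFIG_PPC
	extern unsigned int pv_spin_threshold;	/* hypothetical tunable */
	#define SPIN_THRESHOLD	(pv_spin_threshold)
	#else
	#define SPIN_THRESHOLD	(1 << 15)
	#endif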
> Cheers,
> Longman
>