Message-Id: <57512B73.5010005@linux.vnet.ibm.com>
Date: Fri, 03 Jun 2016 15:02:11 +0800
From: xinhui <xinhui.pan@...ux.vnet.ibm.com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
virtualization@...ts.linux-foundation.org
CC: paulus@...ba.org, mpe@...erman.id.au, peterz@...radead.org,
mingo@...hat.com, paulmck@...ux.vnet.ibm.com, waiman.long@....com
Subject: Re: [PATCH v5 1/6] qspinlock: powerpc support qspinlock
On 2016-06-03 12:33, Benjamin Herrenschmidt wrote:
> On Fri, 2016-06-03 at 12:10 +0800, xinhui wrote:
>>> On 2016-06-03 09:32, Benjamin Herrenschmidt wrote:
>>> On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
>>>> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
>>>>>
>>>>> Base code to enable qspinlock on powerpc. This patch adds some
>>>>> #ifdefs here and there. Although there is no paravirt-related
>>>>> code, we can successfully build a qspinlock kernel after
>>>>> applying this patch.
>>>> This is missing the IO_SYNC stuff ... It means we'll fail to do
>>>> a full sync to order vs MMIOs.
>>>>
>>>> You need to add that back in the unlock path.
>>>
>>> Well, and in the lock path as well...
>>>
>> Oh, yes. I missed IO_SYNC stuff.
>>
>> thank you, Ben :)
>
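To make sure I get this right, here is a minimal sketch of what I plan
for the powerpc lock/unlock wrappers (a sketch only, exact header
plumbing aside; it mirrors the CLEAR_IO_SYNC/SYNC_IO handling in our
current arch_spin_lock()/arch_spin_unlock(). queued_spin_lock() and
queued_spin_unlock() are the generic qspinlock entry points, and
get_paca()->io_sync is the flag the powerpc I/O accessors already set):

#include <linux/compiler.h>
#include <asm/barrier.h>
#include <asm/paca.h>
#include <asm-generic/qspinlock.h>

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
        /* New critical section: discard any stale MMIO-pending state. */
        get_paca()->io_sync = 0;
        queued_spin_lock(lock);
}

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
        /*
         * If an MMIO store was done while holding the lock, issue a
         * full sync so it is ordered before the release store.
         */
        if (unlikely(get_paca()->io_sync)) {
                mb();
                get_paca()->io_sync = 0;
        }
        queued_spin_unlock(lock);
}

The mb() has to come before the release store inside
queued_spin_unlock(); that is what restores the full sync ordering
versus MMIOs.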
> Ok, a couple of other things that would be nice from my perspective
> (and Michael's) if you can produce them:
>
> - Some benchmarks of the qspinlock alone, without the PV stuff,
> so we understand how much of the overhead is inherent to the
> qspinlock and how much is introduced by the PV bits.
>
> - For the above, can you show (or describe) where the qspinlock
> improves things compared to our current locks. While there's
> theory and to some extent practice on x86, it would be nice to
> validate the effects on POWER.
>
> - Comparative benchmarks with the PV stuff enabled on a bare metal system
> to understand the overhead there.
>
> - Comparative benchmarks with the PV stuff under pHyp and KVM
>
I will run these benchmark tests in the next few days.
Thanks for your kind suggestions. :)
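
As a first cut, something like the following throw-away module is what
I have in mind: one kthread bound to each online CPU, all hammering a
single spinlock, with the acquisition count printed after ten seconds.
(A sketch only; the module name, the ten-second run time and the empty
critical section are my arbitrary choices, not part of the series.)

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/delay.h>
#include <linux/atomic.h>
#include <linux/err.h>
#include <linux/cpumask.h>

static DEFINE_SPINLOCK(bench_lock);
static atomic64_t acquisitions = ATOMIC64_INIT(0);
static struct task_struct *workers[NR_CPUS];

static int bench_thread(void *unused)
{
        while (!kthread_should_stop()) {
                spin_lock(&bench_lock);
                /* Empty critical section: measures pure lock overhead. */
                spin_unlock(&bench_lock);
                atomic64_inc(&acquisitions);
        }
        return 0;
}

static int __init bench_init(void)
{
        int cpu, started = 0;

        for_each_online_cpu(cpu) {
                struct task_struct *t;

                t = kthread_create(bench_thread, NULL, "lockbench/%d", cpu);
                if (IS_ERR(t))
                        continue;
                kthread_bind(t, cpu);   /* one contender per CPU */
                workers[started++] = t;
                wake_up_process(t);
        }

        ssleep(10);

        while (started--)
                kthread_stop(workers[started]);

        pr_info("lockbench: %lld acquisitions in 10s\n",
                (long long)atomic64_read(&acquisitions));

        return -EAGAIN; /* one-shot test: do not stay loaded */
}
module_init(bench_init);

MODULE_LICENSE("GPL");

Running the same module on a kernel built with the current ticket locks
and on one built with qspinlock (and again with the PV bits) should
give the comparisons asked for above.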
> Spinlocks are fiddly and a critical piece of infrastructure, it's
> important we fully understand the performance implications before we
> decide to switch to a new model.
>
Yes, we really need to understand how {pv}qspinlock behaves in more complex cases.
thanks
xinhui
> Cheers,
> Ben.
>