Message-ID: <551D6E2E.1080801@hp.com>
Date: Thu, 02 Apr 2015 12:28:30 -0400
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
paolo.bonzini@...il.com, konrad.wilk@...cle.com,
boris.ostrovsky@...cle.com, paulmck@...ux.vnet.ibm.com,
riel@...hat.com, torvalds@...ux-foundation.org,
raghavendra.kt@...ux.vnet.ibm.com, david.vrabel@...rix.com,
oleg@...hat.com, scott.norton@...com, doug.hatch@...com,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
luto@...capital.net
Subject: Re: [PATCH 8/9] qspinlock: Generic paravirt support
On 04/01/2015 05:03 PM, Peter Zijlstra wrote:
> On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
>> On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
>> I am sorry that I don't quite get what you mean here. My point is that
>> in the hashing step, a cpu will need to scan for an empty bucket to put
>> the lock in. In the interim, a previously used bucket before the empty
>> one may get freed. In the lookup step for that lock, the scanning will
>> stop because of an empty bucket in front of the target one.
> Right, that's broken. So we need to do something else to limit the
> lookup, because without that break a lookup would need to iterate the
> entire array in order to determine -ENOENT, which is expensive.
>
> So my alternative proposal is that IFF we can guarantee that every
> lookup will succeed -- the entry we're looking for is always there -- we
> don't need the break on empty but can probe until we find the entry.
> This will be bounded in cost by the same number of probes we required
> for insertion, and it avoids the full array scan.
>
> Now I think we can indeed do this if, as said earlier, we do not clear
> the bucket on insert when the cmpxchg succeeds; in that case the unlock
> will observe _Q_SLOW_VAL and do the lookup, and the lookup will then
> find the entry. We then need the unlock to clear the entry.
> Does that explain this? Or should I try again with code?
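The probe-until-found scheme above can be illustrated with a small user-space sketch. This is not the actual kernel code: the table size, bucket layout, and the `pv_hash`/`pv_unhash` names here are hypothetical, and the sketch is single-threaded. The point it shows is that the lookup loop has no break-on-empty, so freeing an earlier bucket cannot terminate the scan before the target entry is reached:

```c
#include <stddef.h>

#define NBUCKETS 16

/* Hypothetical bucket layout: key 0 means "free". */
struct pv_bucket {
	unsigned long key;	/* lock identifier, 0 = empty */
	int cpu;		/* waiting cpu */
};

static struct pv_bucket pv_table[NBUCKETS];

static size_t pv_hash_fn(unsigned long key)
{
	return key % NBUCKETS;
}

/* Insert: linear-probe for the first free bucket. */
static void pv_hash(unsigned long key, int cpu)
{
	size_t i;

	for (i = pv_hash_fn(key);; i = (i + 1) % NBUCKETS) {
		if (!pv_table[i].key) {
			pv_table[i].cpu = cpu;
			pv_table[i].key = key;
			return;
		}
	}
}

/*
 * Lookup + remove: the entry is guaranteed to be present, so we do
 * NOT stop at an empty bucket -- we keep probing until the key is
 * found.  The cost is bounded by the number of probes the insert
 * needed, and a bucket freed in front of the target one can no
 * longer cut the scan short.
 */
static int pv_unhash(unsigned long key)
{
	size_t i;

	for (i = pv_hash_fn(key);; i = (i + 1) % NBUCKETS) {
		if (pv_table[i].key == key) {
			pv_table[i].key = 0;
			return pv_table[i].cpu;
		}
	}
}
```

For example, keys 1 and 17 both hash to bucket 1, so 17 lands in bucket 2; after `pv_unhash(1)` empties bucket 1, `pv_unhash(17)` still finds its entry because the scan probes past the empty bucket instead of stopping there.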
OK, I got your proposal now. However, there is still the issue that
setting the _Q_SLOW_VAL flag and filling in the hash bucket are not
atomic wrt each other. It is possible that a CPU has set the _Q_SLOW_VAL
flag but has not yet filled in the hash bucket while another CPU is
trying to look it up. So we need some kind of synchronization mechanism
to let the lookup CPU know when it is safe to do the lookup.
One possibility is to delay setting _Q_SLOW_VAL until the hash bucket is
set up. Maybe we can make that work.
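The fill-bucket-then-flag ordering can be sketched with C11 atomics. This is only an illustration under assumptions: the single-slot "bucket", the function names, and the flag values are hypothetical stand-ins, not the kernel's actual implementation. The release/acquire pairing is what makes the ordering work: the bucket is published before the flag, so an unlocker that observes _Q_SLOW_VAL is guaranteed to also observe the filled bucket:

```c
#include <stdatomic.h>

#define _Q_LOCKED_VAL	1u	/* hypothetical flag values */
#define _Q_SLOW_VAL	3u

/* Hypothetical hash bucket, reduced to a single slot (0 = empty). */
static _Atomic unsigned long bucket_key;

/*
 * Waiter side: fill the bucket FIRST, then set _Q_SLOW_VAL.  The
 * release ordering on the cmpxchg ensures the bucket store is
 * visible before the flag is.
 */
static void pv_set_slow(_Atomic unsigned int *lock, unsigned long key)
{
	unsigned int old = _Q_LOCKED_VAL;

	atomic_store_explicit(&bucket_key, key, memory_order_relaxed);
	atomic_compare_exchange_strong_explicit(lock, &old, _Q_SLOW_VAL,
						memory_order_release,
						memory_order_relaxed);
}

/*
 * Unlock side: if the acquire load sees _Q_SLOW_VAL, the bucket
 * lookup cannot miss, because the bucket was published before the
 * flag.  Returns 0 if the slow flag was never set.
 */
static unsigned long pv_unlock_lookup(_Atomic unsigned int *lock)
{
	if (atomic_load_explicit(lock, memory_order_acquire) == _Q_SLOW_VAL)
		return atomic_load_explicit(&bucket_key, memory_order_relaxed);
	return 0;
}
```

With the opposite order (flag first, bucket second) the unlocker could see _Q_SLOW_VAL and then read an empty bucket, which is exactly the race described above.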
Cheers,
Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/