[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <57179409.2010107@linux.vnet.ibm.com>
Date: Wed, 20 Apr 2016 22:36:57 +0800
From: Pan Xinhui <xinhui@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Waiman Long <Waiman.Long@....com>, Ingo Molnar <mingo@...hat.com>,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH v2] locking/pvqspinlock: Add lock holder CPU argument
to pv_wait()
On 2016年04月20日 22:19, Peter Zijlstra wrote:
> On Wed, Apr 20, 2016 at 10:15:09PM +0800, Pan Xinhui wrote:
>> So there is such case that we search the whole hashtable and the lock is not found. :(
>> Waiman assume that if l = null, the lock is not stored. however the lock might be there actually.
>> But to avoid the worst case I just mentioned above, it can quickly finish the lookup.
>
>
>>>> +
>>>> + /*
>>>> + * We try to locate the queue head pv_node by looking
>>>> + * up the hash table. If it is not found, use the
>>>> + * CPU in the previous node instead.
>>>> + */
>>>> + hn = pv_lookup_hash(lock);
>>>> + if (!hn)
>>>> + hn = pn;
>>>
>>> This is potentially expensive... it does not explain why this lookup can
>>> fail etc.. nor mentioned that lock stealing caveat.
>>>
>> Yes, it's expensive. Normally, PPC phyp don't always need the correct
>> holder. That means current vcpu can just give up its slice. There is
>> one lpar hvcall H_CONFER. I paste some spec below.
>
> Ok, so if we can indeed scan the _entire_ hashtable, then we really
> should not have that in common code. That's seriously expensive.
>
Okay, I will try to add the holder lookup code in arch/...
But I just come up with one idea,
in __pv_queued_spin_unlock_slowpath()
we will kick the node->cpu, who will become the holder soon.
I think we can somehow record the node->cpu and use it in pv_wait_node :)
thanks
xinhui
Powered by blists - more mailing lists