Message-ID: <57179AB1.3080403@linux.vnet.ibm.com>
Date: Wed, 20 Apr 2016 23:05:21 +0800
From: Pan Xinhui <xinhui@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Waiman Long <Waiman.Long@....com>, Ingo Molnar <mingo@...hat.com>,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH v2] locking/pvqspinlock: Add lock holder CPU argument
to pv_wait()
On 04/20/2016 22:18, Peter Zijlstra wrote:
> On Wed, Apr 20, 2016 at 10:15:09PM +0800, Pan Xinhui wrote:
>>>> +static struct pv_node *pv_lookup_hash(struct qspinlock *lock)
>>>> +{
>>>> + unsigned long offset, hash = hash_ptr(lock, pv_lock_hash_bits);
>>>> + struct pv_hash_entry *he;
>>>> +
>>>> + for_each_hash_entry(he, offset, hash) {
>>>> + struct qspinlock *l = READ_ONCE(he->lock);
>>>> +
>>>> + if (l == lock)
>>>
>>> The other loop writes:
>>>
>>> if (READ_ONCE(he->lock) == lock)
>>>
>> Maybe because we check whether l is NULL later, so this saves one load.
>
> Ah duh, yes.
>
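To spell out the "one load" point, a minimal side-by-side sketch (illustrative only, not part of the patch):

	/* Reload he->lock on every test: */
	if (READ_ONCE(he->lock) == lock)
		...

	/* Load once and reuse the value for the NULL check: */
	struct qspinlock *l = READ_ONCE(he->lock);

	if (l == lock)
		...
	else if (!l)		/* reuses l, no second load */
		...
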
>>>> + return READ_ONCE(he->node);
>>>> + /*
>>>> + * Presence of an empty slot signals the end of the search. We
>>>> + * may miss the entry, but that will limit the amount of
>>>> + * time doing the search when the desired entry isn't there.
>>>> + */
>>>> + else if (!l)
>>>> + break;
>>>
>>> That 'else' is entirely pointless. Also, why isn't this: return NULL;
>>>
>>>> + }
>>>> + return NULL;
>>>
>>> and this BUG() ?
>>>
>> It's not a bug; the lock might not be stored in the hashtable. In the unlock function we will unhash the lock, and then what will happen is:
>
> It should be if the above becomes a return NULL, no?
>
No, the lock might not be there even if we search the whole hashtable.
Only pv_kick_node() and pv_wait_head_or_lock() will hash the lock. If both vCPUs are in the vcpu_running state, who will hash the lock on our behalf?
Can pv_wait() return without anyone kicking it? If yes, then this is not a bug.
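To make that concrete, a minimal sketch of a caller (hypothetical code, not the actual patch; the extra pv_wait() argument and the -1 "unknown holder" value here are assumptions):

	struct pv_node *hn;
	int holder_cpu;

	/*
	 * A NULL result just means nobody has hashed the lock yet
	 * (the holder never halted), not that anything went wrong,
	 * so fall back to "holder unknown" instead of BUG().
	 */
	hn = pv_lookup_hash(lock);
	holder_cpu = hn ? READ_ONCE(hn->cpu) : -1;

	pv_wait(&pn->state, vcpu_halted, holder_cpu);
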
> If we can iterate the _entire_ hashtable, this lookup can be immensely
> expensive and we should not be doing it inside of a wait-loop.
>