Message-ID: <835f65d9-a041-4956-b89d-7cd3660c3ab8@redhat.com>
Date: Fri, 29 Dec 2023 22:20:25 -0500
From: Waiman Long <longman@...hat.com>
To: David Laight <David.Laight@...LAB.COM>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'peterz@...radead.org'" <peterz@...radead.org>
Cc: "'mingo@...hat.com'" <mingo@...hat.com>,
"'will@...nel.org'" <will@...nel.org>,
"'boqun.feng@...il.com'" <boqun.feng@...il.com>,
'Linus Torvalds' <torvalds@...ux-foundation.org>,
"'xinhui.pan@...ux.vnet.ibm.com'" <xinhui.pan@...ux.vnet.ibm.com>,
"'virtualization@...ts.linux-foundation.org'"
<virtualization@...ts.linux-foundation.org>,
'Zeng Heng' <zengheng4@...wei.com>
Subject: Re: [PATCH next 2/5] locking/osq_lock: Avoid dirtying the local cpu's
'node' in the osq_lock() fast path.
On 12/29/23 17:11, David Laight wrote:
> osq_lock() starts by setting node->next to NULL and node->locked to 0.
> Careful analysis shows that node->next is always NULL on entry.
>
> node->locked is set non-zero by another cpu to force a wakeup.
> This can only happen after the 'prev->next = node' assignment,
> so locked can be set to zero just before that (along with the assignment
> to node->prev).
>
> Only initialise node->cpu once, after that use its value instead
> of smp_processor_id() - which is probably a real function call.
>
> Should reduce cache-line bouncing a little.
>
> Signed-off-by: David Laight <david.laight@...lab.com>
> ---
>
> Re-send without the 'RE:' on the subject line.
>
>   kernel/locking/osq_lock.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index d414eef4bec6..55f5db896c02 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -51,7 +51,7 @@ osq_wait_next(struct optimistic_spin_queue *lock,
> struct optimistic_spin_node *prev)
> {
> struct optimistic_spin_node *next = NULL;
> - int curr = encode_cpu(smp_processor_id());
> + int curr = node->cpu;
> int old;
>
> /*
> @@ -98,12 +98,10 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> {
> struct optimistic_spin_node *node = this_cpu_ptr(&osq_node);
> struct optimistic_spin_node *prev, *next;
> - int curr = encode_cpu(smp_processor_id());
> int old;
>
> - node->locked = 0;
> - node->next = NULL;

I have some concern about not clearing node->next at the beginning. I
know that next is supposed to be cleared at the end, but I do worry that
there may be a race condition that leaves next uncleared before the node
is used again. So you either have to prove that such a race does not
exist, or keep initializing it to NULL at the beginning as is done today.
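
For reference, the exit paths that are expected to leave node->next NULL
look roughly like this (paraphrased from the current osq_lock.c for
illustration, not part of this patch, so double-check the details against
the tree):

	/* osq_unlock() slow path: detach the successor, clearing node->next */
	next = xchg(&node->next, NULL);
	if (next) {
		WRITE_ONCE(next->locked, 1);
		return;
	}

	/* osq_wait_next(): likewise clears node->next before returning it */
	if (node->next) {
		next = xchg(&node->next, NULL);
		if (next)
			break;
	}

The proof would have to show that a late "prev->next = node" store from a
concurrent locker can never land after these xchg()s and survive into this
cpu's next osq_lock() call.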
Cheers,
Longman
> - node->cpu = curr;
> + if (unlikely(node->cpu == OSQ_UNLOCKED_VAL))
> + node->cpu = encode_cpu(smp_processor_id());
>
> /*
> * We need both ACQUIRE (pairs with corresponding RELEASE in
> @@ -111,12 +109,13 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> * the node fields we just initialised) semantics when updating
> * the lock tail.
> */
> - old = atomic_xchg(&lock->tail, curr);
> + old = atomic_xchg(&lock->tail, node->cpu);
> if (old == OSQ_UNLOCKED_VAL)
> return true;
>
> prev = decode_cpu(old);
> node->prev = prev;
> + node->locked = 0;
>
> /*
> * osq_lock() unqueue
> @@ -214,7 +213,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> void osq_unlock(struct optimistic_spin_queue *lock)
> {
> struct optimistic_spin_node *node, *next;
> - int curr = encode_cpu(smp_processor_id());
> + int curr = raw_cpu_read(osq_node.cpu);
>
> /*
> * Fast path for the uncontended case.