Message-ID: <3b5c5cf3-1b8b-4815-8d19-2e28c9751305@redhat.com>
Date: Sat, 30 Dec 2023 10:57:29 -0500
From: Waiman Long <longman@...hat.com>
To: David Laight <David.Laight@...LAB.COM>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'peterz@...radead.org'" <peterz@...radead.org>
Cc: "'mingo@...hat.com'" <mingo@...hat.com>,
"'will@...nel.org'" <will@...nel.org>,
"'boqun.feng@...il.com'" <boqun.feng@...il.com>,
'Linus Torvalds' <torvalds@...ux-foundation.org>,
"'xinhui.pan@...ux.vnet.ibm.com'" <xinhui.pan@...ux.vnet.ibm.com>,
"'virtualization@...ts.linux-foundation.org'"
<virtualization@...ts.linux-foundation.org>,
'Zeng Heng' <zengheng4@...wei.com>
Subject: Re: [PATCH next 5/5] locking/osq_lock: Optimise vcpu_is_preempted() check.
On 12/29/23 22:13, Waiman Long wrote:
>
> On 12/29/23 15:58, David Laight wrote:
>> The vcpu_is_preempted() test stops osq_lock() spinning if a virtual
>> cpu is no longer running.
>> Although patched out for bare-metal, the code still needs the cpu number.
>> Reading this from 'prev->cpu' is pretty much guaranteed to cause a
>> cache miss when osq_unlock() is waking up the next cpu.
>>
>> Instead, save 'prev->cpu' in 'node->prev_cpu' and use that value.
>> Update it in the osq_lock() 'unqueue' path when 'node->prev' is changed.
>>
>> This is simpler than checking for 'node->prev' changing and caching
>> 'prev->cpu'.
>>
>> Signed-off-by: David Laight <david.laight@...lab.com>
>> ---
>> kernel/locking/osq_lock.c | 14 ++++++--------
>> 1 file changed, 6 insertions(+), 8 deletions(-)
>>
>> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
>> index b60b0add0161..89be63627434 100644
>> --- a/kernel/locking/osq_lock.c
>> +++ b/kernel/locking/osq_lock.c
>> @@ -14,8 +14,9 @@
>>  struct optimistic_spin_node {
>>  	struct optimistic_spin_node *self, *next, *prev;
>> -	int locked; /* 1 if lock acquired */
>> -	int cpu; /* encoded CPU # + 1 value */
>> +	int locked;	/* 1 if lock acquired */
>> +	int cpu;	/* encoded CPU # + 1 value */
>> +	int prev_cpu;	/* actual CPU # for vcpu_is_preempted() */
>>  };
>>  
>>  static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node,
>>  				     osq_node);
>> @@ -29,11 +30,6 @@ static inline int encode_cpu(int cpu_nr)
>>  	return cpu_nr + 1;
>>  }
>>  
>> -static inline int node_cpu(struct optimistic_spin_node *node)
>> -{
>> -	return node->cpu - 1;
>> -}
>> -
>>  static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
>>  {
>>  	int cpu_nr = encoded_cpu_val - 1;
>> @@ -114,6 +110,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>>  	if (old == OSQ_UNLOCKED_VAL)
>>  		return true;
>>  
>> +	node->prev_cpu = old - 1;
>>  	prev = decode_cpu(old);
>>  	node->prev = prev;
>>  	node->locked = 0;
>> @@ -148,7 +145,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>>  	 * polling, be careful.
>>  	 */
>>  	if (smp_cond_load_relaxed(&node->locked, VAL || need_resched() ||
>> -				  vcpu_is_preempted(node_cpu(node->prev))))
>> +				  vcpu_is_preempted(node->prev_cpu)))
On second thought, I believe it is more correct to use READ_ONCE() to
access "node->prev_cpu", since this field is subject to change by a
WRITE_ONCE() on another CPU.
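
To illustrate (an untested sketch, not the patch as posted): the read
side in the spin loop would become

	if (smp_cond_load_relaxed(&node->locked, VAL || need_resched() ||
				  vcpu_is_preempted(READ_ONCE(node->prev_cpu))))
		goto unqueue;

pairing with the unqueue-path store that rewrites prev_cpu when
node->prev changes. That hunk is trimmed from the quote above, so the
prev_cpu line below is only an assumption about its shape, slotted into
the existing step-C re-link:

	/* Step C of unqueue: link the successor to our predecessor. */
	WRITE_ONCE(next->prev_cpu, node->prev_cpu);	/* assumed form */
	WRITE_ONCE(next->prev, prev);
	WRITE_ONCE(prev->next, next);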
Cheers,
Longman