Message-ID: <a9b00df6-cacc-56e7-82d9-e7b2875aa898@redhat.com>
Date: Tue, 20 Sep 2022 17:04:35 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] locking/qspinlock: Do spin-wait in slowpath if
preemptible
On 9/20/22 15:55, Waiman Long wrote:
> There are some code paths in the kernel where arch_spin_lock() will be
> called directly when the lock isn't expected to be contended and the
> critical section is short. For example, tracing_saved_cmdlines_size_read()
> in kernel/trace/trace.c does that.
>
> In most cases, preemption is also not disabled. This creates a problem
> for the qspinlock slowpath, which expects preemption to be disabled to
> guarantee the safe use of the per-CPU qnodes structure. To work around
> these special use cases, add a preemption count check in the slowpath
> and do a simple spin-wait when preemption isn't disabled.
>
> Fixes: a33fda35e3a7 ("Introduce a simple generic 4-byte queued spinlock")
> Signed-off-by: Waiman Long <longman@...hat.com>
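
For readers following along, a minimal sketch of what the description above
outlines might look like the following: a preemption-count check at the top
of queued_spin_lock_slowpath() that falls back to a plain test-and-set
spin-wait on the lock word. This is an illustration of the idea only, not
the actual patch, which is not quoted here and may differ in detail:

	/*
	 * Sketch only (kernel/locking/qspinlock.c) -- not the actual patch.
	 * If preemption is enabled, the task could migrate between CPUs,
	 * so the per-CPU qnodes must not be used. Fall back to spinning
	 * on the lock word itself.
	 */
	void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
	{
		if (unlikely(!preempt_count())) {
			for (;;) {
				int old = 0;

				if (atomic_try_cmpxchg_acquire(&lock->val, &old,
							       _Q_LOCKED_VAL))
					return;		/* lock acquired */
				while (atomic_read(&lock->val))
					cpu_relax();	/* wait for unlock */
			}
		}

		/* ... existing MCS queuing path using per-CPU qnodes ... */
	}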
On second thought, I believe the proper way to fix this is to make sure
that all the callers of arch_spin_lock() have preemption properly
disabled. Will work on another patch set to do that. So please ignore
this patch, and sorry for the noise.
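
For illustration, the caller-side fix presumably amounts to disabling
preemption around the raw lock, e.g. for the trace_cmdline_lock case
cited in the patch description (a sketch of the intent, not the actual
follow-up patches):

	/* In tracing_saved_cmdlines_size_read() and similar callers: */
	preempt_disable();
	arch_spin_lock(&trace_cmdline_lock);
	/* ... short critical section ... */
	arch_spin_unlock(&trace_cmdline_lock);
	preempt_enable();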
Cheers,
Longman