Message-ID: <66fdce3a-c7f6-4ef4-ab56-7c9ece0b00e2@nokia.com>
Date: Sat, 9 Mar 2024 08:45:35 +0100
From: Stefan Wiehler <stefan.wiehler@...ia.com>
To: Joel Fernandes <joel@...lfernandes.org>,
Russell King <linux@...linux.org.uk>, "Paul E. McKenney"
<paulmck@...nel.org>, Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>,
linux-arm-kernel@...ts.infradead.org, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm: smp: Avoid false positive CPU hotplug Lockdep-RCU
splat
> I agree with the problem but disagree with the patch because it feels like a
> terrible workaround.
>
> Can we just use arch_spin_lock() for the cpu_asid_lock? This might require
> acquiring the raw_lock within the raw_spinlock_t, but there is precedent:
>
> arch/powerpc/kvm/book3s_hv_rm_mmu.c:245:
> arch_spin_lock(&kvm->mmu_lock.rlock.raw_lock);
>
> IMO, lockdep tracking of this lock is not necessary or possible considering the
> hotplug situation.
>
> Or is there a reason you need lockdep working for the cpu_asid_lock?
I was not aware of this way of bypassing lockdep tracking, but it seems to work
and indeed looks like less of a workaround:

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 4204ffa2d104..4fc2c559f1b6 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -254,7 +254,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
 	    && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
 		goto switch_mm_fastpath;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	local_irq_save(flags);
+	arch_spin_lock(&cpu_asid_lock.raw_lock);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
@@ -269,7 +270,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
 
 	atomic64_set(&per_cpu(active_asids, cpu), asid);
 	cpumask_set_cpu(cpu, mm_cpumask(mm));
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	arch_spin_unlock(&cpu_asid_lock.raw_lock);
+	local_irq_restore(flags);
 
 switch_mm_fastpath:
 	cpu_switch_mm(mm->pgd, mm);
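
For completeness, here is the pattern boiled down to a minimal sketch (the lock
and function names below are made up for illustration, not real kernel symbols).
Dropping the _irqsave variant means interrupts have to be disabled by hand, and
arch_spin_lock() operates on the arch_spinlock_t embedded in the raw_spinlock_t,
so lockdep and the spinlock debugging code never see the acquisition:

#include <linux/spinlock.h>
#include <linux/irqflags.h>

/* Hypothetical lock, standing in for cpu_asid_lock. */
static DEFINE_RAW_SPINLOCK(example_lock);

static void example_critical_section(void)
{
	unsigned long flags;

	/* raw_spin_lock_irqsave() would have disabled IRQs for us. */
	local_irq_save(flags);
	/* Take the inner arch_spinlock_t directly: no lockdep tracking. */
	arch_spin_lock(&example_lock.raw_lock);

	/* ... critical section ... */

	arch_spin_unlock(&example_lock.raw_lock);
	local_irq_restore(flags);
}

The obvious trade-off is that lockdep can no longer report ordering problems
involving this lock at all, which per the discussion above seems acceptable
here given the CPU hotplug situation.
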
@Russell, what do you think?