Message-ID: <ZewycILled+mZhwe@shell.armlinux.org.uk>
Date: Sat, 9 Mar 2024 09:57:04 +0000
From: "Russell King (Oracle)" <linux@...linux.org.uk>
To: Stefan Wiehler <stefan.wiehler@...ia.com>,
Will Deacon <will@...nel.org>
Cc: Joel Fernandes <joel@...lfernandes.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
linux-arm-kernel@...ts.infradead.org, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm: smp: Avoid false positive CPU hotplug Lockdep-RCU
splat
On Sat, Mar 09, 2024 at 08:45:35AM +0100, Stefan Wiehler wrote:
> > I agree with the problem but disagree with the patch because it feels like a
> > terrible workaround.
> >
> > Can we just use arch_spin_lock() for the cpu_asid_lock? This might require
> > acquiring the raw_lock within the raw_spinlock_t, but there is precedent:
> >
> > arch/powerpc/kvm/book3s_hv_rm_mmu.c:245:
> > arch_spin_lock(&kvm->mmu_lock.rlock.raw_lock);
> >
> > IMO, lockdep tracking of this lock is not necessary or possible considering the
> > hotplug situation.
> >
> > Or is there a reason you need lockdep working for the cpu_asid_lock?
>
> I was not aware it was possible to bypass lockdep tracking this way,
> but it seems to work and indeed looks like less of a workaround:
>
> diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
> index 4204ffa2d104..4fc2c559f1b6 100644
> --- a/arch/arm/mm/context.c
> +++ b/arch/arm/mm/context.c
> @@ -254,7 +254,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
> && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
> goto switch_mm_fastpath;
>
> - raw_spin_lock_irqsave(&cpu_asid_lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(&cpu_asid_lock.raw_lock);
> /* Check that our ASID belongs to the current generation. */
> asid = atomic64_read(&mm->context.id);
> if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
> @@ -269,7 +270,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
>
> atomic64_set(&per_cpu(active_asids, cpu), asid);
> cpumask_set_cpu(cpu, mm_cpumask(mm));
> - raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
> + arch_spin_unlock(&cpu_asid_lock.raw_lock);
> + local_irq_restore(flags);
>
> switch_mm_fastpath:
> cpu_switch_mm(mm->pgd, mm);
>
> @Russell, what do you think?
I think this is Will Deacon's code, so we ought to hear from Will...
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!