lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240313003244.GA29568@willie-the-truck>
Date: Wed, 13 Mar 2024 00:32:44 +0000
From: Will Deacon <will@...nel.org>
To: "Russell King (Oracle)" <linux@...linux.org.uk>
Cc: Stefan Wiehler <stefan.wiehler@...ia.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Josh Triplett <josh@...htriplett.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>,
	linux-arm-kernel@...ts.infradead.org, rcu@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm: smp: Avoid false positive CPU hotplug Lockdep-RCU
 splat

On Tue, Mar 12, 2024 at 10:39:30PM +0000, Russell King (Oracle) wrote:
> On Tue, Mar 12, 2024 at 10:14:40PM +0000, Will Deacon wrote:
> > On Sat, Mar 09, 2024 at 09:57:04AM +0000, Russell King (Oracle) wrote:
> > > On Sat, Mar 09, 2024 at 08:45:35AM +0100, Stefan Wiehler wrote:
> > > > diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
> > > > index 4204ffa2d104..4fc2c559f1b6 100644
> > > > --- a/arch/arm/mm/context.c
> > > > +++ b/arch/arm/mm/context.c
> > > > @@ -254,7 +254,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
> > > >             && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
> > > >                 goto switch_mm_fastpath;
> > > > 
> > > > -       raw_spin_lock_irqsave(&cpu_asid_lock, flags);
> > > > +       local_irq_save(flags);
> > > > +       arch_spin_lock(&cpu_asid_lock.raw_lock);
> > > >         /* Check that our ASID belongs to the current generation. */
> > > >         asid = atomic64_read(&mm->context.id);
> > > >         if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
> > > > @@ -269,7 +270,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
> > > > 
> > > >         atomic64_set(&per_cpu(active_asids, cpu), asid);
> > > >         cpumask_set_cpu(cpu, mm_cpumask(mm));
> > > > -       raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
> > > > +       arch_spin_unlock(&cpu_asid_lock.raw_lock);
> > > > +       local_irq_restore(flags);
> > > > 
> > > >  switch_mm_fastpath:
> > > >         cpu_switch_mm(mm->pgd, mm);
> > > > 
> > > > @Russell, what do you think?
> > > 
> > > I think this is Will Deacon's code, so we ought to hear from Will...
> > 
> > Thanks for adding me in.
> > 
> > Using arch_spin_lock() really feels like a bodge to me. This code isn't
> > run only on the hot-unplug path, but rather this is part of switch_mm()
> > and we really should be able to have lockdep work properly there for
> > the usual case.
> > 
> > Now, do we actually need to worry about the ASID when switching to the
> > init_mm? I'd have thought that would be confined to global (kernel)
> > mappings, so I wonder whether we could avoid this slow path code
> > altogether like we do on arm64 in __switch_mm(). But I must confess that
> > I don't recall the details of the pre-LPAE MMU configuration...
> 
> As the init_mm shouldn't have any userspace mappings, isn't the ASID
> entirely redundant? Couldn't check_and_switch_context() just simply
> do the vmalloc seq check, set the reserved ASID, and then head to
> switch_mm_fastpath to call the mm switch code?

Right, that's what I was thinking too, but I have some distant memories
of the module space causing potential issues in some configurations. Does
that ring a bell with you?

Will

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ