Date:	Fri, 6 Feb 2009 11:43:04 -0700
From:	Alex Chiang <achiang@...com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	tony.luck@...el.com, linux-ia64@...r.kernel.org,
	linux-kernel <linux-kernel@...r.kernel.org>, mingo@...e.hu
Subject: Re: [PATCH] ia64: prevent irq migration race in __cpu_disable path

* Paul E. McKenney <paulmck@...ux.vnet.ibm.com>:
> On Fri, Feb 06, 2009 at 11:07:42AM -0700, Alex Chiang wrote:
> > 
> > [removing stable@...nel.org for now while we figure this out]
> > 
> > * Paul E. McKenney <paulmck@...ux.vnet.ibm.com>:
> > > On Fri, Feb 06, 2009 at 09:22:13AM -0700, Alex Chiang wrote:
> > > > ---
> > > > In my opinion, this is .29 material.
> > > > 
> > > > Sorry for the huge changelog:patch ratio, but this area is tricky
> > > > enough that more explanation is better than less, I think.
> > > > 
> > > > Also, I'm still a little troubled by Paul's original patch. What
> > > > happens if we're trying to offline the CPEI target? The code in
> > > > migrate_platform_irqs() uses cpu_online_map to select the new
> > > > CPEI target, and it seems like we can end up in the same
> > > > situation as the problem I'm trying to fix now.
> > > > 
> > > > Paul?
> > > > 
> > > > My patch has held up for over 24 hours of stress testing, where
> > > > we put the system under a heavy load and then randomly
> > > > offline/online CPUs every 2 seconds. Without this patch, the
> > > > machine would crash reliably within 15 minutes.
> > > 
> > > I don't claim much expertise on IA64 low-level architectural details,
> > 
> > I'm starting to get a bit out of my depth here too... :-/
> > 
> > > so I am reduced to asking the usual question...  Does this patch guarantee
> > > that a given CPU won't be executing irq handlers while marked offline?
> > > If there is no such guarantee, things can break.  (See below.)
> > 
> > My patch makes no guarantee. What it does do is prevent a NULL
> > deref while we are, in fact, executing an irq handler while
> > marked offline.
> > 
> > > In any case, apologies for failing to correctly fix the original
> > > problem!!!
> > 
> > I'm curious, reading through your old change log:
> > 
> >     Make ia64 refrain from clearing a given to-be-offlined CPU's
> >     bit in the cpu_online_mask until it has processed pending
> >     irqs.  This change prevents other CPUs from being blindsided
> >     by an apparently offline CPU nevertheless changing globally
> >     visible state. 
> > 
> > Was your patch fixing a theoretical problem or a real bug? What
> > globally visible state were you referencing there?
> > 
> > > > ---
> > > > 
> > > > diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
> > > > index 1146399..2a17d1c 100644
> > > > --- a/arch/ia64/kernel/smpboot.c
> > > > +++ b/arch/ia64/kernel/smpboot.c
> > > > @@ -742,8 +742,8 @@ int __cpu_disable(void)
> > > >  	}
> > > > 
> > > >  	remove_siblinginfo(cpu);
> > > > -	fixup_irqs();
> > > >  	cpu_clear(cpu, cpu_online_map);
> > > > +	fixup_irqs();
> > > 
> > > So your argument is that because we are running in the context of
> > > stop_machine(), even though fixup_irqs() does in fact cause irq handlers
> > > to run on the current CPU which is marked offline, the fact that there
> > > is no one running to notice this misbehavior makes it OK?  (Which
> > > perhaps it is, just asking the question.)
> > 
> > I wouldn't say that I have a solid argument, per se, just fixing
> > symptoms. ;)
> > 
> > My reading of the cpu_down() path makes it seem like we need to
> > process pending interrupts on the current CPU, and the original
> > author certainly thought it was ok to call an irq handler on the
> > current CPU. We don't disable local irqs until the very last step
> > of fixup_irqs().
> > 
> > So the actual design of this path assumed it was ok to call an
> > irq handler on a marked-offline CPU.
> > 
> > Can you educate me on the danger of doing such a thing? That
> > might help in how I interpret the code.
> 
> Well, RCU happily ignores CPUs that don't have their bits set in
> cpu_online_map, so if there are RCU read-side critical sections in the
> irq handlers being run, RCU will ignore them.  If the other CPUs were
> running, they might sequence through the RCU state machine, which could
> result in data structures being yanked out from under those irq handlers,
> which in turn could result in oopses or worse.

Ok, that makes sense.
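
Just to make sure I'm following, here's a rough, hypothetical sketch of
the race I think you're describing -- my_data, gp, my_irq_handler_body(),
etc. are made-up names, not real kernel code:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_data {
	int val;
};

static struct my_data *gp;

/* Imagine this runs from an irq handler on the CPU that has already
 * been cleared from cpu_online_map. */
static void my_irq_handler_body(void)
{
	struct my_data *p;

	rcu_read_lock();                /* RCU no longer tracks this CPU... */
	p = rcu_dereference(gp);
	if (p)
		pr_info("val=%d\n", p->val);  /* ...so p may already be freed */
	rcu_read_unlock();
}

/* Meanwhile, on some other (still online) CPU: */
static void my_update(struct my_data *newp)
{
	struct my_data *old = gp;

	rcu_assign_pointer(gp, newp);
	synchronize_rcu();      /* grace period completes without waiting
				 * for the "offline" CPU's reader above   */
	kfree(old);             /* the irq-side reader can now touch freed
				 * memory -> oops or worse                */
}

If that's the shape of it, then the danger only arises if an RCU
read-side critical section can actually run on the marked-offline CPU,
which is exactly what you were asking about.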

As I'm continuing to dig, I took a look at the x86 side of the 
house and they have this interesting sequence:

cpu_disable_common()
	remove_cpu_from_maps()  /* remove cpu from online map */
	fixup_irqs()
		[break irq-CPU affinity]

		local_irq_enable();
		mdelay(1);
		local_irq_disable();
                
So on x86, we allow interrupt handlers to run on a CPU that's
already been removed from the online map.
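
Spelled out as C, the ordering I mean is roughly this (a simplified
paraphrase from my reading of the x86 code, not a verbatim copy):

#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/irqflags.h>
#include <linux/smp.h>

/* Sketch of the x86-style __cpu_disable path, heavily simplified. */
static int cpu_disable_sketch(void)
{
	int cpu = smp_processor_id();

	cpu_clear(cpu, cpu_online_map); /* what remove_cpu_from_maps()
					 * boils down to                 */

	/* fixup_irqs(): reroute irq affinity away from this CPU, then
	 * briefly re-enable interrupts so anything already in flight is
	 * handled *here*, on a CPU no longer in the online map. */
	local_irq_enable();
	mdelay(1);
	local_irq_disable();

	return 0;
}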

Does that seem like an analogous situation to what we have in
ia64?

Thanks.

/ac, still digging

