Date:	Fri, 20 Sep 2013 11:26:05 -0500
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Paul Mackerras <paulus@....ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	James Hogan <james.hogan@...tec.com>,
	"James E.J. Bottomley" <jejb@...isc-linux.org>,
	Helge Deller <deller@....de>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	"David S. Miller" <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC GIT PULL] softirq: Consolidation and stack overrun fix

On Fri, Sep 20, 2013 at 01:03:17PM +0200, Thomas Gleixner wrote:
> On Thu, 19 Sep 2013, Linus Torvalds wrote:
> 
> > On Thu, Sep 19, 2013 at 2:51 PM, Frederic Weisbecker <fweisbec@...il.com> wrote:
> > >
> > > It fixes stack overruns reported by Benjamin Herrenschmidt:
> > > http://lkml.kernel.org/r/1378330796.4321.50.camel%40pasglop
> > 
> > So I don't really dislike this patch-series, but isn't "irq_exit()"
> > (which calls the new softirq_on_stack()) already running in the
> > context of the irq stack? And it's run at the very end of the irq
> > processing, so the irq stack should be empty too at that point.
> 
> Right, but most of the implementations are braindamaged.
> 
>       irq_enter();
>       handle_irq_on_hardirq_stack();
>       irq_exit();
> 
> instead of doing:
> 	
>       switch_stack()
>       irq_enter()
>       handle_irq()
>       irq_exit()
>       restore_stack()
> 
> So in the case of softirq processing (the likely case) we end up doing:
> 
>    switch_to_hardirq_stack()
>    ...
>    restore_original_stack()
>    switch_to_softirq_stack()
>    ...
>    restore_original_stack()
> 
> Two avoidable stack switch operations for no gain.
> 
> > I'm assuming that the problem is that since we're already on the irq
> > stack, if *another* irq comes in, now that *other* irq doesn't get yet
> > another irq stack page. And I'm wondering whether we shouldn't just
> > fix that (hopefully unlikely) case instead? So instead of having a
> > softirq stack, we'd have just an extra irq stack for the case where
> > the original irq stack is already in use.
> 
> Why not have a single irq_stack large enough to accommodate interrupt
> handling during softirq processing? We have no interrupt nesting so
> the maximum stack depth necessary is
> 
>   max(softirq_stack_usage) + max(irq_stack_usage)
> 
> Today we allocate THREAD_SIZE_ORDER for the hard and the soft context,
> so allocating 2 * THREAD_SIZE_ORDER should be sufficient.

Looks good to me.

Now just for clarity, what do we then do with inline softirq executions: on local_bh_enable()
for example, or explicit calls to do_softirq() other than at irq exit?

Should we keep the current switch to a different softirq stack? If we have a generic irq stack
(used for both hard and soft contexts) that is big enough, perhaps we can also switch to this
generic irq stack for inline softirq executions? After all, there is not much point in keeping
a separate stack for that: it results in cache misses if the inline softirq is interrupted by
a hardirq, and since inline softirqs can't happen in hardirq context, there should be little
risk of overruns.

Or am I missing other things?
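
(Roughly what I have in mind, as pseudo-code only; call_on_irq_stack() is an
invented helper name, not an existing API:)

```c
/* Sketch: inline softirq execution reusing the single generic irq
 * stack instead of a dedicated softirq stack. */
static inline void invoke_softirq_inline(void)
{
	if (!in_interrupt() && local_softirq_pending()) {
		/* Sharing the hardirq stack is safe here: inline
		 * softirqs never run from hardirq context, so the only
		 * nesting on top of us is a single hardirq. */
		call_on_irq_stack(__do_softirq);
	}
}
```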

> 
> Thanks,
> 
> 	tglx