Message-ID: <CA+55aFyXf370Ds6FH5wPPycZLSPD5qiP7rjWqxwDetfHfKjg1w@mail.gmail.com>
Date: Thu, 19 Sep 2013 19:02:20 -0500
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@....ibm.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>,
James Hogan <james.hogan@...tec.com>,
"James E.J. Bottomley" <jejb@...isc-linux.org>,
Helge Deller <deller@....de>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC GIT PULL] softirq: Consolidation and stack overrun fix
On Thu, Sep 19, 2013 at 2:51 PM, Frederic Weisbecker <fweisbec@...il.com> wrote:
>
> It fixes stack overruns reported by Benjamin Herrenschmidt:
> http://lkml.kernel.org/r/1378330796.4321.50.camel%40pasglop
So I don't really dislike this patch series, but isn't "irq_exit()"
(which calls the new softirq_on_stack()) already running in the
context of the irq stack? And it's run at the very end of the irq
processing, so the irq stack should be empty too at that point.
So switching to *another* empty stack sounds really sad. No? It means
taking more cache misses etc., instead of using the already empty - but
cache-hot - stack that we already have.
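To put the concern in concrete terms, the irq_exit() path in question
looks roughly like this (only a hand-wavy sketch, not the actual code
from the series; the body shown for softirq_on_stack() is made up here):

#include <linux/interrupt.h>
#include <linux/preempt.h>

/*
 * Illustrative sketch of the tail of interrupt handling -- we usually
 * get here still on the per-cpu irq stack, with that stack already
 * unwound back to (nearly) empty.
 */
void irq_exit(void)
{
	preempt_count_sub(HARDIRQ_OFFSET);

	if (!in_interrupt() && local_softirq_pending())
		softirq_on_stack();	/* switches to a *separate* softirq
					 * stack, even though the irq stack
					 * under us is empty and cache-hot */
}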
I'm assuming that the problem is that since we're already on the irq
stack, if *another* irq comes in, now that *other* irq doesn't get yet
another irq stack page. And I'm wondering whether we shouldn't just
fix that (hopefully unlikely) case instead? So instead of having a
softirq stack, we'd have just an extra irq stack for the case where
the original irq stack is already in use.
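Concretely, that alternative might look something like this (again just
a sketch with invented names, assuming x86-style per-cpu hardirq stacks;
none of this is real code from the series):

#include <linux/percpu.h>

/* The usual irq stack, plus a spare one handed out only when nested. */
DEFINE_PER_CPU(void *, hardirq_stack);
DEFINE_PER_CPU(void *, hardirq_spare_stack);
DEFINE_PER_CPU(bool, hardirq_stack_in_use);

static void *pick_irq_stack(void)
{
	/* Common case: first interrupt, use the normal irq stack. */
	if (!__this_cpu_read(hardirq_stack_in_use))
		return __this_cpu_read(hardirq_stack);

	/*
	 * We interrupted code that is already running on the irq stack
	 * (e.g. softirqs being handled there at irq_exit() time), so
	 * hand out the spare stack instead of risking an overrun.
	 */
	return __this_cpu_read(hardirq_spare_stack);
}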
Hmm?
Linus