Message-ID: <alpine.DEB.2.21.1904070803200.1840@nanos.tec.linutronix.de>
Date: Sun, 7 Apr 2019 08:08:56 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Andy Lutomirski <luto@...nel.org>
cc: LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [patch V2 28/29] x86/irq/64: Remap the IRQ stack with guard
pages

On Sat, 6 Apr 2019, Andy Lutomirski wrote:
> On Fri, Apr 5, 2019 at 8:11 AM Thomas Gleixner <tglx@...utronix.de> wrote:
> >
> > From: Andy Lutomirski <luto@...nel.org>
> >
> > The IRQ stack lives in percpu space, so an IRQ handler that overflows it
> > will overwrite other data structures.
> >
> > Use vmap() to remap the IRQ stack so that it will have the usual guard
> > pages that vmap/vmalloc allocations have. With this the kernel will panic
> > immediately on an IRQ stack overflow.
>
> The 0day bot noticed that this dies with DEBUG_PAGEALLOC on. This is
> because the store_stackinfo() function is utter garbage and this patch
> correctly detects just how broken it is. The attached patch "fixes"
> it. (It also contains a reliability improvement that should probably
> get folded in, but is otherwise unrelated.)
>
> A real fix would remove the generic kstack_end() function entirely
> along with __HAVE_ARCH_KSTACK_END and would optionally replace
> store_stackinfo() with something useful. Josh, do we have a generic
> API to do a little stack walk like this? Otherwise, I don't think it
> would be the end of the world to just remove the offending code.
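
For readers following along: the remapping described in the quoted changelog
amounts to building a struct page array for the percpu backing store and
handing it to vmap(), which places the stack in the vmalloc area where an
unmapped guard page follows the mapping. A minimal sketch of that approach,
assuming helper and variable names such as irq_stack_backing_store and
hardirq_stack_ptr which may differ from the actual patch:

/*
 * Sketch only: build a page array for the percpu backing store and
 * vmap() it, so the IRQ stack ends up in the vmalloc area with a
 * guard page after it.  irq_stack_backing_store and hardirq_stack_ptr
 * are assumed names, not necessarily those used in the series.
 */
static int map_irq_stack(unsigned int cpu)
{
	char *stack = (char *)per_cpu_ptr(&irq_stack_backing_store, cpu);
	struct page *pages[IRQ_STACK_SIZE / PAGE_SIZE];
	void *va;
	int i;

	for (i = 0; i < IRQ_STACK_SIZE / PAGE_SIZE; i++) {
		phys_addr_t pa = per_cpu_ptr_to_phys(stack + (i << PAGE_SHIFT));

		pages[i] = pfn_to_page(pa >> PAGE_SHIFT);
	}

	/* vmap() leaves an unmapped guard page after the area by default */
	va = vmap(pages, IRQ_STACK_SIZE / PAGE_SIZE, VM_MAP, PAGE_KERNEL);
	if (!va)
		return -ENOMEM;

	per_cpu(hardirq_stack_ptr, cpu) = va + IRQ_STACK_SIZE;
	return 0;
}

An overflow then runs into the guard page and faults immediately instead of
silently scribbling over adjacent percpu data.
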
Yes, I found the same yesterday before heading out. It's already broken
with the percpu stack because there is no guarantee that the percpu stack
is THREAD_SIZE aligned. It's only guaranteed to be page aligned, not more.

I'm all for removing that nonsense, but the real question is whether there
is more code which assumes THREAD_SIZE aligned stacks aside from the thread
stack itself.
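
For reference, the generic helper Andy mentions looks roughly like the
following (modulo exact details); it declares "end of stack" purely from
THREAD_SIZE alignment, which only ever holds for the regular thread stack:

#ifndef __HAVE_ARCH_KSTACK_END
static inline int kstack_end(void *addr)
{
	/*
	 * End of stack is derived solely from THREAD_SIZE alignment,
	 * so this gives bogus answers for a merely page aligned percpu
	 * or vmapped IRQ stack.
	 */
	return !(((unsigned long)addr + sizeof(void *) - 1) &
		 (THREAD_SIZE - sizeof(void *)));
}
#endif
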
Thanks,
tglx