Message-ID: <CA+CK2bA22AP2jrbHjdN8nYFbYX2xJXQt+=4G3Rjw_Lyn5NOyKA@mail.gmail.com>
Date: Mon, 11 Mar 2024 19:10:23 -0400
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, linux-mm@...ck.org, 
	Andrew Morton <akpm@...ux-foundation.org>, "the arch/x86 maintainers" <x86@...nel.org>, 
	Borislav Petkov <bp@...en8.de>, Christian Brauner <brauner@...nel.org>, bristot@...hat.com, 
	Ben Segall <bsegall@...gle.com>, Dave Hansen <dave.hansen@...ux.intel.com>, dianders@...omium.org, 
	dietmar.eggemann@....com, eric.devolder@...cle.com, hca@...ux.ibm.com, 
	"hch@...radead.org" <hch@...radead.org>, "H. Peter Anvin" <hpa@...or.com>, 
	Jacob Pan <jacob.jun.pan@...ux.intel.com>, Jason Gunthorpe <jgg@...pe.ca>, jpoimboe@...nel.org, 
	Joerg Roedel <jroedel@...e.de>, juri.lelli@...hat.com, 
	Kent Overstreet <kent.overstreet@...ux.dev>, kinseyho@...gle.com, 
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, lstoakes@...il.com, mgorman@...e.de, 
	mic@...ikod.net, michael.christie@...cle.com, Ingo Molnar <mingo@...hat.com>, 
	mjguzik@...il.com, "Michael S. Tsirkin" <mst@...hat.com>, Nicholas Piggin <npiggin@...il.com>, 
	"Peter Zijlstra (Intel)" <peterz@...radead.org>, Petr Mladek <pmladek@...e.com>, 
	Rick P Edgecombe <rick.p.edgecombe@...el.com>, Steven Rostedt <rostedt@...dmis.org>, 
	Suren Baghdasaryan <surenb@...gle.com>, Thomas Gleixner <tglx@...utronix.de>, 
	Uladzislau Rezki <urezki@...il.com>, vincent.guittot@...aro.org, vschneid@...hat.com
Subject: Re: [RFC 11/14] x86: add support for Dynamic Kernel Stacks

On Mon, Mar 11, 2024 at 6:17 PM Andy Lutomirski <luto@...nel.org> wrote:
>
>
>
> On Mon, Mar 11, 2024, at 9:46 AM, Pasha Tatashin wrote:
> > Add dynamic_stack_fault() calls to the kernel faults, and also declare
> > HAVE_ARCH_DYNAMIC_STACK = y, so that dynamic kernel stacks can be
> > enabled on x86 architecture.
> >
> > Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
> > ---
> >  arch/x86/Kconfig        | 1 +
> >  arch/x86/kernel/traps.c | 3 +++
> >  arch/x86/mm/fault.c     | 3 +++
> >  3 files changed, 7 insertions(+)
> >
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 5edec175b9bf..9bb0da3110fa 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -197,6 +197,7 @@ config X86
> >       select HAVE_ARCH_USERFAULTFD_WP         if X86_64 && USERFAULTFD
> >       select HAVE_ARCH_USERFAULTFD_MINOR      if X86_64 && USERFAULTFD
> >       select HAVE_ARCH_VMAP_STACK             if X86_64
> > +     select HAVE_ARCH_DYNAMIC_STACK          if X86_64
> >       select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
> >       select HAVE_ARCH_WITHIN_STACK_FRAMES
> >       select HAVE_ASM_MODVERSIONS
> > diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> > index c3b2f863acf0..cc05401e729f 100644
> > --- a/arch/x86/kernel/traps.c
> > +++ b/arch/x86/kernel/traps.c
> > @@ -413,6 +413,9 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
> >       }
> >  #endif
> >
> > +     if (dynamic_stack_fault(current, address))
> > +             return;
> > +
>
> Sorry, but no, you can't necessarily do this.  I say this as the person who wrote this code, and I justified my code on the basis that we are not recovering -- we're jumping out to a different context, and we won't crash if the origin context of the fault is corrupt.  The SDM is really quite unambiguous about it: we're in an "abort" context, and returning is not allowed.  And I think this may well be the real deal -- the microcode does not promise to have the return frame and the actual faulting context matched up here, and there is no architectural guarantee that returning will do the right thing.
>
> Now we do have some history of getting a special exception, e.g. for espfix64.  But espfix64 is a very special case, and the situation you're looking at is very general.  So unless Intel and AMD are both willing to publicly document that it's okay to handle a stack overflow like this, where any instruction in the ISA may have caused the overflow, we're not going to do it.

Hi Andy,

Thank you for the insightful feedback.

I'm somewhat confused about why we end up in exc_double_fault() in the
first place. My initial assumption was that dynamic_stack_fault()
would only be needed within do_kern_addr_fault(). However, while
testing in QEMU, I found that when using memset() on a stack variable,
code like this:

rep stos %rax,%es:(%rdi)

causes a double fault instead of a regular page fault. I added the
dynamic_stack_fault() call to exc_double_fault() as a result, but I'm
curious whether you have any insight into why this behavior occurs.

> There are some other options: you could pre-map

Pre-mapping would be expensive. It would mean pre-mapping the dynamic
pages for every scheduled thread, and we'd still need to check the
accessed bit every time a thread leaves the CPU. Dynamic stack faults
should be considered rare events and thus shouldn't significantly
affect the performance of normal context-switch operations. With 8K
stacks, we might encounter only 0.00001% of stacks requiring an extra
page, and even fewer needing 16K.

> Also, I think the whole memory allocation concept in this series is a bit odd.  Fundamentally, we *can't* block on these stack faults -- we may be in a context where blocking will deadlock.  We may be in the page allocator.  Panicking due to kernel stack allocation failure would be very unpleasant.

We never block while handling stack faults. There's a per-CPU page
pool, guaranteeing availability for the faulting thread. The thread
simply takes pages from this per-CPU data structure and refills the
pool when leaving the CPU. The faulting routine is efficient,
requiring only a fixed number of loads, with no locks, stalls, or even
cmpxchg operations.
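
To make this concrete, here is a minimal C sketch of such a per-CPU
pool. The names (dynamic_stack_pool, POOL_SIZE, the two helpers) are
illustrative assumptions, not the identifiers used in the series:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/percpu.h>

#define POOL_SIZE	3	/* grow a 4K stack to its 16K maximum */

struct dynamic_stack_pool {
	struct page *pages[POOL_SIZE];
	int count;
};

static DEFINE_PER_CPU(struct dynamic_stack_pool, dynamic_stack_pool);

/*
 * Fault path: take a pre-allocated page. A fixed number of loads and
 * stores; no locks, no allocator calls. Assumes preemption is off,
 * as it is in a fault handler.
 */
static struct page *dynamic_stack_take_page(void)
{
	struct dynamic_stack_pool *pool = this_cpu_ptr(&dynamic_stack_pool);

	if (pool->count == 0)
		return NULL;	/* pool exhausted */
	return pool->pages[--pool->count];
}

/*
 * Schedule-out path: top the pool back up. Non-blocking, since this
 * may be called from the context-switch path.
 */
static void dynamic_stack_refill(void)
{
	struct dynamic_stack_pool *pool = this_cpu_ptr(&dynamic_stack_pool);

	while (pool->count < POOL_SIZE) {
		struct page *page = alloc_page(GFP_NOWAIT);

		if (!page)
			break;	/* see the scheduling rule below */
		pool->pages[pool->count++] = page;
	}
}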

> But perhaps we could have a rule that a task can only be scheduled in if there is sufficient memory available for its stack.

Yes, I've considered this as well. We might implement it to avoid
crashes when a stack fault arrives while the pool is empty. Basically,
if the per-CPU pool cannot be refilled, we'd prevent task scheduling
until it can be. At that point the system is already so short on
memory that the kernel cannot allocate even the up-to-three pages a
stack might need.
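
As a sketch of that rule (hypothetical, building on the pool above):
before switching a task in, the scheduler would try to top up the
pool and hold off while it remains short:

/*
 * Hypothetical gate: only run a task if the per-CPU pool holds
 * enough pages to satisfy any stack fault the task might take.
 */
static bool dynamic_stack_can_schedule(void)
{
	dynamic_stack_refill();	/* non-blocking top-up, as above */

	return this_cpu_ptr(&dynamic_stack_pool)->count == POOL_SIZE;
}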

Thank you,
Pasha

>  And perhaps we could avoid ever page-faulting by filling in the PTEs for the potential stack pages but leaving them un-accessed.  I *think* that all x86 implementations won't fill the TLB for a non-accessed page without also setting the accessed bit, so the performance hit of filling the PTEs, running the task, and then doing the appropriate synchronization to clear the PTEs and read the accessed bit on schedule-out to release the pages may not be too bad.  But you would need to do this cautiously in the scheduler, possibly in the *next* task but before the prev task is actually released enough to be run on a different CPU.  It's going to be messy.
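
For illustration, a rough sketch of the accessed-bit scheme described
above. stack_pte() and NR_DYNAMIC_STACK_PAGES are assumed helpers, and
the PTE clearing plus TLB flush (the messy part) is left as a comment:

#include <linux/pgtable.h>
#include <linux/sched.h>

#define NR_DYNAMIC_STACK_PAGES	3

/* Assumed helper: the vmap PTE backing page @i of @tsk's stack. */
static pte_t *stack_pte(struct task_struct *tsk, int i);

static void dynamic_stack_scan_accessed(struct task_struct *tsk)
{
	int i;

	for (i = 0; i < NR_DYNAMIC_STACK_PAGES; i++) {
		pte_t *pte = stack_pte(tsk, i);

		if (pte_young(*pte))
			continue;	/* hardware set A: page is in use */
		/*
		 * Never accessed: the PTE can be cleared and the page
		 * released, but only after a TLB flush, and only once
		 * the task can no longer be run on another CPU.
		 */
	}
}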
