Message-ID: <20141026202943.GA9871@lerouge>
Date:	Sun, 26 Oct 2014 21:29:47 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Richard Weinberger <richard.weinberger@...il.com>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: vmalloced stacks on x86_64?

On Sat, Oct 25, 2014 at 10:49:25PM -0700, Andy Lutomirski wrote:
> On Oct 25, 2014 9:11 PM, "Frederic Weisbecker" <fweisbec@...il.com> wrote:
> >
> > 2014-10-25 2:22 GMT+02:00 Andy Lutomirski <luto@...capital.net>:
> > > Is there any good reason not to use vmalloc for x86_64 stacks?
> > >
> > > The tricky bits I've thought of are:
> > >
> > >  - On any context switch, we probably need to probe the new stack
> > > before switching to it.  That way, if it's going to fault due to an
> > > out-of-sync pgd, we still have a stack available to handle the fault.
> >
> > Would that prevent any further fault on a vmalloc'ed kernel
> > stack? We would need to ensure that pre-faulting, say, the first byte
> > is enough to sync the whole new stack; otherwise we risk another
> > fault later, and some places really can't safely take one.
> >
> >
> 
> I think so.  The vmalloc faults only happen when the entire top-level
> page table entry is missing, and those cover giant swaths of address
> space.
> 
> I don't know whether the vmalloc code guarantees not to span a pmd
> (pud? why couldn't these be called pte0, pte1, pte2, etc.?) boundary.

So dereferencing stack[0] is probably enough for 8KB worth of stack. I think
we have vmalloc_sync_all(), but I heard it only works on x86-64.

Too bad we don't have a universal solution; I have the same problem with per-CPU
allocated memory faulting at random places. I've hit at least two places where it
was harmful: context tracking and perf callchains. We fixed the latter using
open-coded per-CPU allocation, but I still haven't found a solution for context
tracking.
