Message-ID: <1466082695.15275.6.camel@redhat.com>
Date:	Thu, 16 Jun 2016 09:11:35 -0400
From:	Rik van Riel <riel@...hat.com>
To:	kernel-hardening@...ts.openwall.com,
	Mika Penttilä <mika.penttila@...tfour.com>
Cc:	Nadav Amit <nadav.amit@...il.com>,
	Kees Cook <keescook@...omium.org>,
	Josh Poimboeuf <jpoimboe@...hat.com>,
	Borislav Petkov <bp@...en8.de>,
	Brian Gerst <brgerst@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	X86 ML <x86@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [kernel-hardening] Re: [PATCH 12/13] x86/mm/64: Enable vmapped
 stacks

On Wed, 2016-06-15 at 22:33 -0700, Andy Lutomirski wrote:
> 
> > > +++ b/arch/x86/mm/tlb.c
> > > @@ -77,10 +77,25 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > >       unsigned cpu = smp_processor_id();
> > > 
> > >       if (likely(prev != next)) {
> > > +             if (IS_ENABLED(CONFIG_VMAP_STACK)) {
> > > +                     /*
> > > +                      * If our current stack is in vmalloc space and isn't
> > > +                      * mapped in the new pgd, we'll double-fault.  Forcibly
> > > +                      * map it.
> > > +                      */
> > > +                     unsigned int stack_pgd_index =
> > > +                             pgd_index(current_stack_pointer());
> > 
> > The stack pointer is still the previous task's; current_stack_pointer()
> > returns that, not the next task's, which I guess was the intention.
> > Things may happen to work if both are on the same pgd, but at least
> > the boot cpu's init_task is special.
> This is intentional.  When switching processes, we first switch the mm
> and then switch the task.  We need to make sure that the prev stack is
> mapped in the new mm, or we'll double-fault and die after switching the
> mm while still trying to execute on the old stack.
> 
> The change to switch_to makes sure that the new stack is mapped.
> 

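For readers without the rest of the patch: the quoted hunk is cut off
right after stack_pgd_index is computed.  A rough sketch of how that
check plausibly continues (a sketch only, not quoted from the patch;
pgd_none, set_pgd and init_mm are the usual kernel page table symbols)
is to copy the stack's pgd entry from init_mm into the incoming mm:

	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
		unsigned int stack_pgd_index =
			pgd_index(current_stack_pointer());
		pgd_t *pgd = next->pgd + stack_pgd_index;

		/*
		 * vmalloc space is shared through init_mm's page tables,
		 * so if the incoming mm has no top-level entry covering
		 * the current (vmapped) stack, copy it over before we
		 * switch to that mm and keep running on a stack it
		 * cannot translate.
		 */
		if (unlikely(pgd_none(*pgd)))
			set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
	}
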
On a tangential HARDENED_USERCOPY note: by not allowing
copy_to/from_user access to vmalloc memory by default, with the
exception of the stack, a task will only be able to copy_to/from_user
from its own stack, not another task's stack, at least via the kernel
virtual address the kernel uses to access that stack.

This can be accomplished by simply not adding any vmalloc
checking code to the current HARDENED_USERCOPY patch set :)
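
To illustrate, a hypothetical, simplified check (usercopy_vmalloc_ok is
a made-up name, not anything in the patch set) with the effect described
above would look roughly like:

	/*
	 * Reject copy_to/from_user ranges in vmalloc space unless the
	 * range lies entirely within the current task's own stack.
	 */
	static bool usercopy_vmalloc_ok(const void *ptr, unsigned long n)
	{
		unsigned long addr  = (unsigned long)ptr;
		unsigned long stack = (unsigned long)task_stack_page(current);

		/* Our own (possibly vmapped) stack is fine. */
		if (addr >= stack && addr + n <= stack + THREAD_SIZE)
			return true;

		/*
		 * Anything else in vmalloc space, including another
		 * task's vmapped stack, is refused by default.
		 */
		return !is_vmalloc_addr(ptr);
	}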

-- 
All rights reversed
