Date:   Mon, 08 Jan 2018 21:38:37 +0000
From:   David Woodhouse <dwmw2@...radead.org>
To:     Andi Kleen <andi@...stfloor.org>
Cc:     pjt@...gle.com, linux-kernel@...r.kernel.org,
        gregkh@...ux-foundation.org, tim.c.chen@...ux.intel.com,
        dave.hansen@...el.com, tglx@...utronix.de, luto@...capital.net,
        Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH] x86/retpoline: Avoid return buffer underflows on
 context switch

On Mon, 2018-01-08 at 12:15 -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@...ux.intel.com>
> 
> [This is on top of David's retpoline branch, as of 08-01 this morning]
> 
> This patch further hardens retpoline.
> 
> CPUs have return buffers which store return addresses that RET uses to
> predict function returns. Some CPUs (Skylake, some Broadwells) can fall
> back to indirect branch prediction when the return buffer underflows.
> 
> With retpoline we want to avoid uncontrolled indirect branches,
> which could be poisoned by ring 3, so we need to avoid uncontrolled
> return buffer underflows in the kernel.
> 
> This can happen when we're context switching from a shallower to a
> deeper kernel stack.  Returning down the deeper kernel stack would
> eventually underflow the return buffer, at which point the CPU again
> falls back to the indirect branch predictor.
> 
> To guard against this fill the return buffer with controlled
> content during context switch. This prevents any underflows.
> 
> We always fill the buffer with 30 entries: 32 minus 2 for at
> least one call from entry_{64,32}.S to C code and another into
> the function doing the filling.
> 
> That's pessimistic, because we have likely made more controlled kernel
> calls already, so in principle we could fill fewer entries.  However such
> an invariant is hard to maintain, and it may be broken by more aggressive
> compilers.  So err on the side of safety and always fill 30.
> 
> Signed-off-by: Andi Kleen <ak@...ux.intel.com>

Thanks.

Acked-by: David Woodhouse <dwmw@...zon.co.uk>

We want this on vmexit too, right? And the IBRS/IBPB patch set is going
to want to do similar things. But picking the RSB stuffing out of that
patch set and putting it in with the retpoline support is absolutely
the right thing to do.
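
For illustration, here is a minimal sketch in C with inline assembly of the
RSB-stuffing pattern the quoted patch describes. The actual patch implements
this as an assembly macro in the entry code; the function name stuff_rsb, the
local labels, and the hard-coded count of 30 below are assumptions made for
this sketch, not the patch's code.

/*
 * Hypothetical sketch of RSB stuffing (x86-64 only), illustrating the idea
 * from the quoted patch; the real kernel code is an assembly macro used on
 * context switch and differs in structure and detail.
 *
 * Each CALL pushes a controlled return address into the CPU's return stack
 * buffer.  The pause/lfence "speculation trap" after the call is never
 * executed architecturally, but a speculative RET consuming that RSB entry
 * lands there and spins harmlessly instead of falling back to the indirect
 * branch predictor.  Since none of the calls ever return, the software
 * stack pointer is fixed up at the end.
 */
static inline void stuff_rsb(void)
{
	unsigned long loops = 30;	/* entries to stuff, per the patch */

	asm volatile(
		"1:	call	2f\n\t"		/* push a controlled return address */
		"3:	pause\n\t"		/* speculation trap */
		"	lfence\n\t"
		"	jmp	3b\n\t"
		"2:	dec	%0\n\t"
		"	jnz	1b\n\t"
		"	add	$(30*8), %%rsp\n\t"	/* drop the 30 pushed addresses */
		: "+r" (loops)
		:
		: "memory", "cc");
}

The key point, per the patch description, is that these calls never actually
return: each one only deposits a controlled target in the return stack
buffer, so a later RET that would otherwise underflow the buffer speculates
into the harmless pause/lfence loop rather than into a predictor that ring 3
could have poisoned.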

