Message-ID: <9d70fd98-ca47-47af-b5b1-064435ba77f1@citrix.com>
Date:   Sat, 1 May 2021 18:35:23 +0100
From:   Andrew Cooper <andrew.cooper3@...rix.com>
To:     Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        David Kaplan <David.Kaplan@....com>,
        David Woodhouse <dwmw2@...radead.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Kees Cook <keescook@...omium.org>,
        Jann Horn <jannh@...gle.com>,
        xen-devel <xen-devel@...ts.xenproject.org>
Subject: Re: Do we need to do anything about "dead µops?"

On 01/05/2021 17:26, Andy Lutomirski wrote:
> Hi all-
>
> The "I See Dead µops" paper that is all over the Internet right now is
> interesting, and I think we should discuss the extent to which we
> should do anything about it.  I think there are two separate issues:
>
> First, should we (try to) flush the µop cache across privilege
> boundaries?  I suspect we could find ways to do this, but I don't
> really see the point.  A sufficiently capable attacker (i.e. one who
> can execute their own code in the dangerous speculative window or one
> who can find a capable enough string of gadgets) can put secrets into
> the TLB, various cache levels, etc.  The µop cache is a nice piece of
> analysis, but I don't think it's qualitatively different from anything
> else that we don't flush.  Am I wrong?
>
> Second, the authors claim that their attack works across LFENCE.  I
> think that this is what's happening:
>
> load secret into %rax
> lfence
> call/jmp *%rax
>
> As I understand it, on CPUs from all major vendors, the call/jmp will
> gladly fetch before lfence retires, but the address from which it
> fetches will come from the branch predictors, not from the
> speculatively computed value in %rax.

The vendor-provided literature on pipelines (primarily, the optimisation
guides) has the register file down by the execute units, and not near
the frontend.  Letting the frontend have access to the register value is
distinctly non-trivial.
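
For concreteness, a minimal sketch of the construct in question, as
plain GNU C with inline asm (hypothetical names, user space, not
kernel code). The lfence keeps the indirect call from dispatching
until the earlier load has completed, but the frontend is still free
to fetch ahead from whatever target the BTB has recorded for this
call site:

    typedef void (*handler_t)(void);

    static void dispatch(handler_t *table, unsigned long idx)
    {
            /* value that ends up in a register, e.g. %rax */
            handler_t target = table[idx];

            /* serialises dispatch, not instruction fetch */
            asm volatile("lfence" ::: "memory");

            /* typically compiles to "call *%rax" or similar */
            target();
    }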

> So this is nothing new.  If the
> kernel leaks a secret into the branch predictors, we have already
> mostly lost, although we have a degree of protection from the various
> flushes we do.  In other words, if we do:
>
> load secret into %rax
> <-- non-speculative control flow actually gets here
> lfence
> call/jmp *%rax
>
> then we will train our secret into the predictors but also leak it
> into icache, TLB, etc, and we already lose.  We shouldn't be doing
> this in a way that matters.  But, if we do:
>
> mispredicted branch
> load secret into %rax
> <-- this won't retire because the branch was mispredicted
> lfence
> <-- here we're fetching but not dispatching
> call/jmp *%rax
>
> then the leak does not actually occur unless we already did the
> problematic scenario above.
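
For what it's worth, the usual defence against that first step --
loading a secret under a mispredicted bounds check -- is to clamp the
untrusted index with a branchless mask, in the spirit of the kernel's
array_index_nospec(). A minimal sketch, with a hypothetical helper
name (a real implementation pins the mask computation in asm, cmp/sbb,
so the compiler cannot reintroduce a conditional branch):

    #include <stddef.h>

    /* all-ones mask when idx < size, zero otherwise; no branch */
    static inline size_t index_nospec(size_t idx, size_t size)
    {
            size_t mask = (size_t)0 - (size_t)(idx < size);

            return idx & mask;
    }

    /* even if the bounds check is mispredicted, the dependent load
     * cannot reach out-of-bounds (secret) data */
    static unsigned char load_clamped(const unsigned char *arr,
                                      size_t idx, size_t size)
    {
            return idx < size ? arr[index_nospec(idx, size)] : 0;
    }
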
>
> So my tentative analysis is that no action on Linux's part is required.
>
> What do you all think?

Everything here seems to boil down to managing to encode the secret
into the branch predictor state, and then recovering it via the uop
cache side channel.

It is well known and generally understood that once your secret is in
the branch predictor, you have already lost. Code with that property
was broken before this paper, is still broken after it, and needs
fixing regardless.

Viewed in these terms, I don't see what security improvement is gained
from trying to flush the uop cache.

I tentatively agree with your conclusion that no specific action
concerning the uop cache is required.

~Andrew
