Message-Id: <77F59E25-5244-4CBC-A3CB-DCF863803CD2@amacapital.net>
Date: Fri, 12 Oct 2018 07:43:09 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Alan Cox <gnomes@...rguk.ukuu.org.uk>
Cc: Andy Lutomirski <luto@...nel.org>,
Kees Cook <keescook@...omium.org>,
Kristen Carlson Accardi <kristen@...ux.intel.com>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86: entry: flush the cache if syscall error
On Oct 12, 2018, at 7:25 AM, Alan Cox <gnomes@...rguk.ukuu.org.uk> wrote:
>> But this really needs to be clarified. Alan said that a bunch of the
>> "yet another Spectre variant" attacks would have been mitigated by
>> this patch. An explanation of *how* would be in order.
>
> Today you have the situation where something creates a speculative
> disclosure gadget. So we run around and try to guess where to fix
> them all with lfence. If you miss one, it leaves a trace in the L1D
> cache, which is what you measure.
>
> In almost every case we have looked at when you leave a footprint in the
> L1D you resolve to an error path so the syscall errors.
>
> In other words every time we fail to find a
>
> if (foo < limit) {
>         gadget(array[foo]);
> } else
>         return -EINVAL;
>
> we turn that from being an easy-to-use gadget into something really
> tricky, because by the time the code flow has gotten back to the caller
> the breadcrumbs have been eaten by the L1D flush.
My understanding is that the standard “breadcrumb” is a cache line being fetched into L1D, and that the line in question goes into L1D even if it was previously not cached at all. So flushing L1D will change the timing a probe sees, but the breadcrumb is still there in an outer cache level, and the attack will still work.
Am I wrong?
>
>
> At best you have a microscopic window to attack it on the SMT pair.
So only the extra clever attackers will pull it off. This isn’t all that reassuring.
If I had the time to try to break this, I would set it up so that the cache lines that get probed are cached remotely, and I’d spin, waiting until one of them gets stolen. The spin loop would be much faster this way.
Or I would get a fancy new CPU and use UMONITOR and, unless UMONITOR is much cleverer than I suspect it is, the jig is up. The time window for the attack could be as small as you want, and UMONITOR will catch it.
(Have I mentioned that I think that Intel needs to remove UMONITOR, PCOMMIT-style? That instruction is simply the wrong solution to whatever problem it’s trying to solve.)
>
> Alan