Message-ID: <CALCETrW29qzigJY99GSFg4EVabbh6VnUmfPvtwO+BtPKfKWj-Q@mail.gmail.com>
Date: Thu, 11 Oct 2018 13:47:49 -0700
From: Andy Lutomirski <luto@...nel.org>
To: One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>
Cc: Andrew Lutomirski <luto@...nel.org>,
Kristen Carlson Accardi <kristen@...ux.intel.com>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86: entry: flush the cache if syscall error
On Thu, Oct 11, 2018 at 1:25 PM Alan Cox <gnomes@...rguk.ukuu.org.uk> wrote:
>
> > Ugh.
> >
> > What exactly is this trying to protect against? And how many cycles
>
> Most attacks by speculation rely upon leaving footprints in the L1 cache.
> They also almost inevitably resolve non-speculatively to errors. If you
> look through all the 'yet another potential spectre case' patches people
> have found, they would have been rendered close to useless by this change.
Can you give an example? AFAIK the interesting Meltdown-like attacks
are unaffected because Meltdown doesn't actually need the target data
to be in L1D. And most of the Spectre-style attacks would have been
blocked by doing LFENCE on the error cases (and somehow making sure
that the CPU doesn't speculate around the LFENCE without noticing it).
But this patch is doing an L1D flush, which, as far as I've heard,
isn't actually relevant.
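
(To be concrete, here's roughly the shape of what I mean -- a sketch
only; the helper name and where it would be called from are invented,
and this is not the actual entry code:)

#include <linux/compiler.h>	/* unlikely() */
#include <linux/err.h>		/* MAX_ERRNO */

/*
 * Sketch: issue a speculation barrier only when the syscall is
 * returning an error, so the error path resolves architecturally
 * before any further attacker-controlled work can run ahead of it.
 */
static inline void fence_on_syscall_error(long retval)
{
	if (unlikely((unsigned long)retval >= (unsigned long)-MAX_ERRNO))
		asm volatile("lfence" ::: "memory");
}
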
>
> It's a way to deal with the ones we don't know about, all the ones the
> tools won't find, and it has pretty much zero cost.
>
> (If you are bored, strace an entire day's desktop session, bang it through
> a script or two to extract the number of triggering error returns and do
> the maths...)
>
> > should we expect L1D_FLUSH to take?
>
> More to the point, you pretty much never trigger it. Errors are not the
> normal path in real code. The original version of this code emptied the
> L1 the hard way - and even then it was in the noise for real workloads we
> tried.
>
> You can argue that the other thread could be some evil task that
> deliberately triggers flushes, but it can already thrash the L1 on
> processors that share L1 between threads using perfectly normal memory
> instructions.
>
That's not what I meant. I meant that, if an attacker can run code on
*both* logical threads on the same CPU, then they can run their attack
code on the other logical thread before the L1D_FLUSH command takes
effect.
I care about the performance of single-threaded workloads, though.
How slow is this thing? No one cares about syscall performance on
regular desktop sessions except for gaming. But people do care about
syscall performance on all kinds of crazy server, database, etc.
workloads. And compilation. And HPC stuff, although that mostly
doesn't involve syscalls. So: benchmarks, please. And estimated
cycle counts, please, on at least a couple of relevant CPU
generations.
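
(Even something as crude as the sketch below would give a first-order
number: it just hammers a syscall that always fails and reports cycles
per call. Run it with and without the patch, pinned to one CPU with
frequency scaling off. Entirely illustrative, not a proper benchmark.)

/* Crude userspace microbenchmark: cycles per always-failing syscall.
 * Build: gcc -O2 -o errbench errbench.c
 */
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>		/* __rdtsc() */

int main(void)
{
	const unsigned long iters = 1000000;
	unsigned long long start, end;
	unsigned long i;

	start = __rdtsc();
	for (i = 0; i < iters; i++)
		close(-1);	/* always fails with EBADF */
	end = __rdtsc();

	printf("%.1f cycles per failing syscall\n",
	       (double)(end - start) / iters);
	return 0;
}
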
On Meltdown-affected CPUs, we're doing a CR3 write anyway, which is
fully serializing, so it's slow. But AFAIK that *already* blocks most
of these attacks except L1TF, and L1TF has (hopefully!) been fixed
anyway on Linux.