Message-ID: <1f97f1d9-d209-f2ab-406d-fac765006f91@oracle.com>
Date: Fri, 12 Jul 2019 14:17:20 +0200
From: Alexandre Chartre <alexandre.chartre@...cle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: pbonzini@...hat.com, rkrcmar@...hat.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, kvm@...r.kernel.org,
x86@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
konrad.wilk@...cle.com, jan.setjeeilers@...cle.com,
liran.alon@...cle.com, jwadams@...gle.com, graf@...zon.de,
rppt@...ux.vnet.ibm.com, Paul Turner <pjt@...gle.com>
Subject: Re: [RFC v2 00/27] Kernel Address Space Isolation
On 7/12/19 1:44 PM, Peter Zijlstra wrote:
> On Thu, Jul 11, 2019 at 04:25:12PM +0200, Alexandre Chartre wrote:
>> Kernel Address Space Isolation aims to use address spaces to isolate some
>> parts of the kernel (for example KVM) to prevent leaking sensitive data
>> between hyper-threads under speculative execution attacks. You can refer
>> to the first version of this RFC for more context:
>>
>> https://lkml.org/lkml/2019/5/13/515
>
> No, no, no!
>
> That is the crux of this entire series; you don't get to punt on explaining
> exactly why we want to go dig through 26 patches of gunk.
>
> You get to exactly explain what (your definition of) sensitive data is,
> and which speculative scenarios and how this approach mitigates them.
>
> And included in that is a high level overview of the whole thing.
>
Ok, I will rework the explanation. Sorry about that.
> On the one hand you've made this implementation for KVM, while on the
> other hand you're saying it is generic but then fail to describe any
> !KVM user.
>
> AFAIK all speculative fails this is relevant to are now public, so
> excruciatingly horrible details are fine and required.
Ok.
> AFAIK2 this is all because of MDS but it also helps with v1.
Yes, mostly MDS and also L1TF.
> AFAIK3 this wants/needs to be combined with core-scheduling to be
> useful, but not a single mention of that is anywhere.
No. This is actually an alternative to core-scheduling. Eventually, when
ASI exits isolation and needs to run with the full kernel page-table, it
will kick all sibling hyperthreads (note that this is currently not in
these patches).
So ASI can be seen as an optimization over disabling hyperthreading:
instead of disabling hyperthreading outright, you run with ASI, and only
when ASI can't preserve isolation do you effectively fall back to running
a single thread.
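To make that sequence concrete, here is a minimal, self-contained C sketch
of the intended exit path. It is purely illustrative: all names below
(asi_exit, kick_sibling_hyperthreads, switch_page_table, struct asi) are
hypothetical and are not taken from these patches.

/*
 * Illustrative sketch only; all names are hypothetical, not from the
 * actual ASI patches.
 */
struct asi {
        void *restricted_pgd;   /* ASI page-table: non-sensitive mappings only */
        void *full_pgd;         /* full kernel page-table */
};

/* Stub: in a real kernel this would IPI the sibling hyperthreads so they
 * stop running guest/user code before sensitive data becomes mapped. */
void kick_sibling_hyperthreads(int cpu) { (void)cpu; }

/* Stub: in a real kernel this would switch CR3 to the given page-table. */
void switch_page_table(void *pgd) { (void)pgd; }

/*
 * Exiting isolation: siblings are kicked first, then the full kernel
 * page-table is installed, so sensitive data is never mapped while an
 * untrusted sibling hyperthread may still be speculating.
 */
void asi_exit(struct asi *asi, int this_cpu)
{
        kick_sibling_hyperthreads(this_cpu);
        switch_page_table(asi->full_pgd);
}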
I will add all that to the explanation.
Thanks,
alex.