Message-ID: <CALCETrXph3Zg907kWTn6gAsZVsPbCB3A2XuNf0hy5Ez2jm2aNQ@mail.gmail.com>
Date:   Mon, 17 Jun 2019 08:54:36 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Alexander Graf <graf@...zon.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Marius Hillenbrand <mhillenb@...zon.de>,
        kvm list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Kernel Hardening <kernel-hardening@...ts.openwall.com>,
        Linux-MM <linux-mm@...ck.org>, Alexander Graf <graf@...zon.de>,
        David Woodhouse <dwmw@...zon.co.uk>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC 00/10] Process-local memory allocations for hiding KVM secrets

On Mon, Jun 17, 2019 at 8:50 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 6/17/19 12:38 AM, Alexander Graf wrote:
> >> Yes I know, but as a benefit we could get rid of all the GSBASE
> >> horrors in the entry code as we could just put the percpu space
> >> into the local PGD.
> >
> > Would that mean that with Meltdown-affected CPUs we open up
> > speculation attacks against the mm-local memory from KVM user space?
>
> Not necessarily.  There would likely be a _set_ of local PGDs.  We could
> still have a pair of PTI PGDs just like we do now; they'd just be a
> local PGD pair.
>
Unfortunately, this would mean that we need to sync twice as many
top-level entries when we context switch.
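To put numbers on it, here is a minimal user-space sketch of the
bookkeeping (struct mm_local_pgds, sync_local_pgds, etc. are made-up
names for illustration, not actual kernel APIs).  With a PTI-style
local PGD pair per mm, any change to the shared kernel top-level
entries has to be propagated into both tables:

#include <string.h>

#define PTRS_PER_PGD 512

typedef unsigned long pgd_t;

/* Hypothetical per-mm state: a local kernel-view PGD plus its PTI
 * user-view shadow. */
struct mm_local_pgds {
	pgd_t kernel_pgd[PTRS_PER_PGD];
	pgd_t user_pgd[PTRS_PER_PGD];	/* PTI shadow */
};

/* Reference copy of the shared kernel mappings (upper half of the
 * address space). */
static pgd_t init_kernel_pgd[PTRS_PER_PGD];

/* With a local PGD pair, the shared kernel entries have to land in
 * two top-level tables instead of one, i.e. twice as many entries
 * to keep in sync. */
static void sync_local_pgds(struct mm_local_pgds *mm)
{
	size_t half = PTRS_PER_PGD / 2;

	memcpy(&mm->kernel_pgd[half], &init_kernel_pgd[half],
	       half * sizeof(pgd_t));
	memcpy(&mm->user_pgd[half], &init_kernel_pgd[half],
	       half * sizeof(pgd_t));
}

The real kernel would obviously not memcpy the whole upper half on
every switch, but whatever sync scheme we end up with has twice the
number of top-level tables to maintain.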