Message-Id: <20190714171127.GA15645@rapoport-lnx>
Date: Sun, 14 Jul 2019 20:11:29 +0300
From: Mike Rapoport <rppt@...ux.ibm.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: Alexandre Chartre <alexandre.chartre@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...el.com>, pbonzini@...hat.com,
rkrcmar@...hat.com, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, kvm@...r.kernel.org,
x86@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
konrad.wilk@...cle.com, jan.setjeeilers@...cle.com,
liran.alon@...cle.com, jwadams@...gle.com, graf@...zon.de,
rppt@...ux.vnet.ibm.com, Paul Turner <pjt@...gle.com>
Subject: Re: [RFC v2 00/27] Kernel Address Space Isolation
On Fri, Jul 12, 2019 at 10:45:06AM -0600, Andy Lutomirski wrote:
>
>
> > On Jul 12, 2019, at 10:37 AM, Alexandre Chartre <alexandre.chartre@...cle.com> wrote:
> >
> >
> >
> >> On 7/12/19 5:16 PM, Thomas Gleixner wrote:
> >>> On Fri, 12 Jul 2019, Peter Zijlstra wrote:
> >>>> On Fri, Jul 12, 2019 at 01:56:44PM +0200, Alexandre Chartre wrote:
> >>>>
> >>>> I think that's precisely what makes ASI and PTI different and independent.
> >>>> PTI is just about switching between userland and kernel page-tables, while
> >>>> ASI is about switching page-table inside the kernel. You can have ASI without
> >>>> having PTI. You can also use ASI for kernel threads so for code that won't
> >>>> be triggered from userland and so which won't involve PTI.
> >>>
> >>> PTI is not mapping kernel space to avoid speculation crap (meltdown).
> >>> ASI is not mapping part of kernel space to avoid (different) speculation crap (MDS).
> >>>
> >>> See how very similar they are?
> >>>
> >>> Furthermore, to recover SMT for userspace (under MDS) we not only need
> >>> core-scheduling but core-scheduling per address space. And ASI was
> >>> specifically designed to help mitigate the trainwreck just described.
> >>>
> >>> By explicitly exposing (hopefully harmless) part of the kernel to MDS,
> >>> we reduce the part that needs core-scheduling and thus reduce the rate
> >>> the SMT siblings need to sync up/schedule.
> >>>
> >>> But looking at it that way, it makes no sense to retain 3 address
> >>> spaces, namely:
> >>>
> >>> user / kernel exposed / kernel private.
> >>>
> >>> Specifically, it makes no sense to expose part of the kernel through MDS
> >>> but not through Meltdown. Therefore we can merge the user and kernel
> >>> exposed address spaces.
> >>>
> >>> And then we've fully replaced PTI.
> >>>
> >>> So no, they're not orthogonal.
> >> Right. If we decide to expose more parts of the kernel mappings then that's
> >> just adding more stuff to the existing user (PTI) map mechanics.
> >
> > If we expose more parts of the kernel mapping by adding them to the existing
> > user (PTI) map, then we only control the mapping of kernel sensitive data but
> > we don't control user mapping (with ASI, we exclude all user mappings).
> >
> > How would you control the mapping of userland sensitive data and exclude them
> > from the user map?
>
> As I see it, if we think part of the kernel is okay to leak to VM guests,
> then we should think it’s okay to leak to userspace, and vice versa. At the end
> of the day, this may just have to come down to an administrator’s choice
> of how careful the mitigations need to be.
>
> > Would you have the application explicitly identify sensitive
> > data (like Andy suggested with a /dev/xpfo device)?
>
> That’s not really the intent of my suggestion. I was suggesting that
> maybe we don’t need ASI at all if we allow VMs to exclude their memory
> from the kernel mapping entirely. Heck, in a setup like this, we can
> maybe even get away with turning PTI off under very, very controlled
> circumstances. I’m not quite sure what to do about the kernel random
> pools, though.
I think KVM already allows excluding a VM's memory from the kernel mapping
with the "new guest mapping interface" [1]. The memory managed by the host
can be restricted with "mem=" and KVM maps/unmaps the guest memory pages
only when needed.
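
For completeness, the way I read [1], the usage pattern looks roughly like
the sketch below. kvm_vcpu_map()/kvm_vcpu_unmap() and struct kvm_host_map
are the helpers added by that series; the copy_from_guest() wrapper is just
something I made up for illustration:

#include <linux/kvm_host.h>

/*
 * Illustrative only: map a guest page, copy out of it, and drop the
 * mapping again so the page does not stay in the kernel address space.
 */
static int copy_from_guest(struct kvm_vcpu *vcpu, gpa_t gpa,
			   void *data, unsigned int len)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return -EFAULT;

	/* the guest page is mapped into the kernel only for this access */
	memcpy(data, map.hva + offset_in_page(gpa), len);

	kvm_vcpu_unmap(vcpu, &map, false);
	return 0;
}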
It would be interesting to see if /dev/xpfo or even
madvise(MAKE_MY_MEMORY_PRIVATE) can be made useful for multi-tenant
container hosts.
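
On the userspace side I'd imagine something along these lines (the advice
value is completely made up, it only stands in for whatever interface we'd
end up with for "drop these pages from the kernel mappings"):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* hypothetical advice value, nothing like this exists today */
#define MADV_MAKE_MY_MEMORY_PRIVATE	64

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	void *secret;

	/* page-aligned buffer holding, say, a tenant's keys */
	if (posix_memalign(&secret, page, page))
		return 1;

	/* ask the kernel to unmap these pages from its own address space */
	if (madvise(secret, page, MADV_MAKE_MY_MEMORY_PRIVATE))
		perror("madvise");

	return 0;
}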
[1] https://lore.kernel.org/lkml/1548966284-28642-1-git-send-email-karahmed@amazon.de/
--
Sincerely yours,
Mike.