Message-Id: <e8a4cac5-bc5a-4483-9443-c0e5b9f707d1@www.fastmail.com>
Date: Tue, 12 Apr 2022 17:16:22 -0700
From: "Andy Lutomirski" <luto@...nel.org>
To: "Vishal Annapurve" <vannapurve@...gle.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
"kvm list" <kvm@...r.kernel.org>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
linux-kselftest@...r.kernel.org
Cc: "Paolo Bonzini" <pbonzini@...hat.com>,
"Vitaly Kuznetsov" <vkuznets@...hat.com>,
"Wanpeng Li" <wanpengli@...cent.com>,
"Jim Mattson" <jmattson@...gle.com>,
"Joerg Roedel" <joro@...tes.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Ingo Molnar" <mingo@...hat.com>, "Borislav Petkov" <bp@...en8.de>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, shauh@...nel.org,
yang.zhong@...el.com, drjones@...hat.com, ricarkol@...gle.com,
aaronlewis@...gle.com, wei.w.wang@...el.com,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
"Jonathan Corbet" <corbet@....net>,
"Hugh Dickins" <hughd@...gle.com>,
"Jeff Layton" <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Chao Peng" <chao.p.peng@...ux.intel.com>,
"Yu Zhang" <yu.c.zhang@...ux.intel.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"Dave Hansen" <dave.hansen@...el.com>,
"Michael Roth" <michael.roth@....com>,
"Quentin Perret" <qperret@...gle.com>,
"Steven Price" <steven.price@....com>,
"Andi Kleen" <ak@...ux.intel.com>,
"David Hildenbrand" <david@...hat.com>,
"Vlastimil Babka" <vbabka@...e.cz>,
"Marc Orr" <marcorr@...gle.com>,
"Erdem Aktas" <erdemaktas@...gle.com>,
"Peter Gonda" <pgonda@...gle.com>,
"Sean Christopherson" <seanjc@...gle.com>, diviness@...gle.com,
"Quentin Perret" <qperret@...gle.com>
Subject: Re: [RFC V1 PATCH 0/5] selftests: KVM: selftests for fd-based approach of supporting private memory

On Fri, Apr 8, 2022, at 2:05 PM, Vishal Annapurve wrote:
> This series implements selftests targeting the feature floated by Chao
> via:
> https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
>
> The changes below aim to test the fd-based approach for guest private
> memory in the context of normal (non-confidential) VMs executing on
> non-confidential platforms.
>
> Confidential platforms, together with a confidentiality-aware software
> stack, support a notion of private/shared accesses from confidential
> VMs. Generally, a bit in the GPA conveys whether an access is shared
> or private. Non-confidential platforms have no such notion of private
> or shared accesses from guest VMs. To support it, KVM_HC_MAP_GPA_RANGE
> is modified to allow marking accesses from a VM within a GPA range as
> always shared or always private. Any suggestions for implementing this
> hypercall alternatively/more cleanly are appreciated.
This is fantastic. I do think we need to decide how this should work in
general. We have a few platforms with somewhat different properties:

TDX: The guest decides, per memory access (using a GPA bit), whether an
access is private or shared. In principle, the same address could be
*both* and be distinguished by only that bit, and the two addresses
would refer to different pages.

SEV: The guest decides, per memory access (using a GPA bit), whether an
access is private or shared. At any given time, a physical address
(with that bit masked off) can be private, shared, or invalid, but it
can't be valid as private and shared at the same time.

pKVM (currently, as I understand it): The guest decides by hypercall,
in advance of an access, which addresses are private and which are
shared.

This series, if I understood it correctly, is like TDX except with no
hardware security.

Sean or Chao, do you have a clear sense of whether the current fd-based
private memory proposal can cleanly support SEV and pKVM? What, if
anything, needs to be done on the API side to get that working well? I
don't think we need to support SEV or pKVM right away to get this
merged, but I do think we should understand how the API can map to
them.