Message-Id: <2cb3217b-8af5-4349-b59f-ca4a3703a01a@www.fastmail.com>
Date: Fri, 12 Nov 2021 13:39:28 -0800
From: "Andy Lutomirski" <luto@...nel.org>
To: "Marc Orr" <marcorr@...gle.com>,
"Sean Christopherson" <seanjc@...gle.com>
Cc: "Borislav Petkov" <bp@...en8.de>,
"Dave Hansen" <dave.hansen@...el.com>,
"Peter Gonda" <pgonda@...gle.com>,
"Brijesh Singh" <brijesh.singh@....com>,
"the arch/x86 maintainers" <x86@...nel.org>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
"kvm list" <kvm@...r.kernel.org>, linux-coco@...ts.linux.dev,
linux-mm@...ck.org,
"Linux Crypto Mailing List" <linux-crypto@...r.kernel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Ingo Molnar" <mingo@...hat.com>, "Joerg Roedel" <jroedel@...e.de>,
"Tom Lendacky" <Thomas.Lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>,
"Ard Biesheuvel" <ardb@...nel.org>,
"Paolo Bonzini" <pbonzini@...hat.com>,
"Vitaly Kuznetsov" <vkuznets@...hat.com>,
"Wanpeng Li" <wanpengli@...cent.com>,
"Jim Mattson" <jmattson@...gle.com>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
"Sergio Lopez" <slp@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Srinivas Pandruvada" <srinivas.pandruvada@...ux.intel.com>,
"David Rientjes" <rientjes@...gle.com>,
"Dov Murik" <dovmurik@...ux.ibm.com>,
"Tobin Feldman-Fitzthum" <tobin@....com>,
"Michael Roth" <Michael.Roth@....com>,
"Vlastimil Babka" <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
"Andi Kleen" <ak@...ux.intel.com>,
"Tony Luck" <tony.luck@...el.com>,
"Sathyanarayanan Kuppuswamy"
<sathyanarayanan.kuppuswamy@...ux.intel.com>
Subject: Re: [PATCH Part2 v5 00/45] Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support
On Fri, Nov 12, 2021, at 1:30 PM, Marc Orr wrote:
> On Fri, Nov 12, 2021 at 12:38 PM Sean Christopherson <seanjc@...gle.com> wrote:
>>
>> On Fri, Nov 12, 2021, Borislav Petkov wrote:
>> > On Fri, Nov 12, 2021 at 07:48:17PM +0000, Sean Christopherson wrote:
>> > > Yes, but IMO inducing a fault in the guest because of a _host_ bug is wrong.
>> >
>> > What do you suggest instead?
>>
>> Let userspace decide what is mapped shared and what is mapped private. The kernel
>> and KVM provide the APIs/infrastructure to do the actual conversions in a thread-safe
>> fashion and also to enforce the current state, but userspace is the control plane.
>>
>> It would require non-trivial changes in userspace if there are multiple processes
>> accessing guest memory, e.g. Peter's networking daemon example, but it _is_ fully
>> solvable. The exit to userspace means all three components (guest, kernel,
>> and userspace) have full knowledge of what is shared and what is private. There
>> is zero ambiguity:
>>
>> - if userspace accesses guest private memory, it gets SIGSEGV or whatever.
>> - if kernel accesses guest private memory, it does BUG/panic/oops[*]
>> - if guest accesses memory with the incorrect C/SHARED-bit, it gets killed.
>>
>> This is the direction KVM TDX support is headed, though it's obviously still a WIP.
>>
>> And ideally, to avoid implicit conversions at any level, hardware vendors' ABIs
>> define that:
>>
>> a) All convertible memory, i.e. RAM, starts as private.
>> b) Conversions between private and shared must be done via explicit hypercall.
>>
>> Without (b), userspace and thus KVM have to treat guest accesses to the incorrect
>> type as implicit conversions.
>>
>> [*] Sadly, fully preventing kernel access to guest private is not possible with
>> TDX, especially if the direct map is left intact. But maybe in the future
>> TDX will signal a fault instead of poisoning memory and leaving a #MC mine.
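
For concreteness, the userspace side of that could look roughly like the
sketch below. The exit handling, ioctl, and struct names are hypothetical
placeholders rather than an existing KVM uAPI; the point is only the shape
of "guest asks for a conversion, KVM exits, userspace decides and tells the
kernel".

/*
 * Hypothetical sketch only: KVM_SET_PRIVATE_MEMORY and struct
 * kvm_private_mem are placeholder names, not an existing uAPI.
 */
#include <stdint.h>
#include <sys/ioctl.h>

struct kvm_private_mem {
	uint64_t gpa;        /* start of the range the guest asked to convert */
	uint64_t size;
	uint8_t  to_private; /* 1 = shared -> private, 0 = private -> shared */
};

#define KVM_SET_PRIVATE_MEMORY _IOW('k', 0xd0, struct kvm_private_mem)

/*
 * Invoked by the VMM's vcpu loop when KVM exits because the guest issued
 * the explicit conversion hypercall (Sean's (b) above).
 */
static int handle_memory_convert(int vm_fd, uint64_t gpa, uint64_t size,
				 int to_private)
{
	struct kvm_private_mem req = {
		.gpa = gpa,
		.size = size,
		.to_private = to_private,
	};

	/*
	 * Userspace is the control plane: it can refuse the request (e.g.
	 * the range isn't convertible RAM); otherwise it asks the kernel to
	 * do the actual thread-safe conversion and enforce the new state.
	 */
	return ioctl(vm_fd, KVM_SET_PRIVATE_MEMORY, &req);
}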
>
> In this proposal, consider a guest driver instructing a device to DMA
> write a 1 GB memory buffer. A well-behaved guest driver will ensure
> that the entire 1 GB is marked shared. But what about a malicious or
> buggy guest? Let's assume a bad guest driver instructs the device to
> write guest private memory.
>
> So now, the virtual device, which might be implemented as some host
> side process, needs to (1) check and lock all 4k constituent RMP
> entries (so they're not converted to private while the DMA write is
> taking place), (2) write the 1 GB buffer, and (3) unlock all 4k
> constituent RMP entries? If I'm understanding this correctly, then the
> synchronization will be prohibitively expensive.
Let's consider a very similar scenario: a guest driver setting up a 1 GB DMA buffer. The virtual device, implemented as a host process, needs to (1) map in (and thus lock, *or* be prepared to handle faults on) 1 GB / 4 kB worth of guest pages (so they're not *freed* while the DMA write is taking place), (2) write the buffer, and (3) unlock all the pages. Or it can lock them at setup time and keep them locked for a long time, if that's appropriate.
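
The host-process side of that pattern is something like the sketch below.
mlock() here just stands in for whatever "don't let this go away underneath
me" mechanism the device implementation actually uses (pinning, region
refcounting, invalidation callbacks, ...), and the guest-memory fd/offset
are assumed to come from device setup rather than being any particular API:

#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>

/* Sketch of a host-process "device" doing a DMA-style write into guest RAM. */
static int dma_write(int guest_mem_fd, off_t offset, const void *data,
		     size_t len)
{
	/* (1) Map the target guest buffer and lock it, so the pages can't
	 *     be freed or repurposed while the write is in flight. */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 guest_mem_fd, offset);
	if (buf == MAP_FAILED)
		return -1;
	if (mlock(buf, len)) {
		munmap(buf, len);
		return -1;
	}

	/* (2) The actual "DMA" write. */
	memcpy(buf, data, len);

	/* (3) Unlock and unmap when the transfer completes -- or keep the
	 *     mapping/lock for the device's lifetime if that's cheaper. */
	munlock(buf, len);
	munmap(buf, len);
	return 0;
}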
Sure, the locking is expensive, but it's nonnegotiable. The RMP issue is just a special case of the more general issue that the host MUST NOT ACCESS GUEST MEMORY AFTER IT'S FREED.
--Andy