Message-ID: <73c62e76d83fe4e5990b640582da933ff3862cb1.camel@intel.com>
Date: Sat, 13 Jul 2024 01:28:34 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>
CC: "seanjc@...gle.com" <seanjc@...gle.com>, "michael.roth@....com"
<michael.roth@....com>
Subject: Re: [PATCH 09/12] KVM: guest_memfd: move check for already-populated
page to common code
On Thu, 2024-07-11 at 18:27 -0400, Paolo Bonzini wrote:
> Do not allow populating the same page twice with startup data. In the
> case of SEV-SNP, for example, the firmware does not allow it anyway,
> since the launch-update operation is only possible on pages that are
> still shared in the RMP.
>
> Even if it worked, kvm_gmem_populate()'s callback is meant to have side
> effects such as updating launch measurements, and updating the same
> page twice is unlikely to have the desired results.
>
> Races between calls to the ioctl are not possible because kvm_gmem_populate()
> holds slots_lock and the VM should not be running. But again, even if
> this worked on other confidential computing technology, it doesn't matter
> to guest_memfd.c whether this is something fishy such as missing
> synchronization in userspace, or rather something intentional. One of the
> racers wins, and the page is initialized by either
> kvm_gmem_prepare_folio() or kvm_gmem_populate().
>
> Anyway, out of paranoia, adjust sev_gmem_post_populate() to use the same
> errno that kvm_gmem_populate() is using.
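
If I'm reading the patch right, the check that moved into common code treats a
folio that is already uptodate as already populated, roughly (a sketch of my
understanding, not the exact hunk from the patch):

	/* per-page loop in kvm_gmem_populate() */
	if (folio_test_uptodate(folio)) {
		/* already prepared/populated */
		folio_unlock(folio);
		folio_put(folio);
		ret = -EEXIST;
		break;
	}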

This patch breaks our rebased TDX development tree. First,
kvm_gmem_prepare_folio() is called during the KVM_PRE_FAULT_MEMORY operation,
and then kvm_gmem_populate() is called during the KVM_TDX_INIT_MEM_REGION ioctl
to actually populate the memory, which now hits the new -EEXIST error path.
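
Roughly what we hit on the rebased tree (call paths abbreviated, and based on
our rebase rather than anything upstream):

	KVM_PRE_FAULT_MEMORY
	  kvm_gmem_get_pfn()
	    kvm_gmem_prepare_folio()
	      folio_mark_uptodate(folio)	/* folio now looks populated */

	KVM_TDX_INIT_MEM_REGION
	  kvm_gmem_populate()
	    folio_test_uptodate(folio)		/* true -> -EEXIST, init fails */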

Given we are not actually populating anything during KVM_PRE_FAULT_MEMORY, and
we try to avoid booting a TD until the memory has actually been populated,
maybe calling folio_mark_uptodate() in kvm_gmem_prepare_folio() is not
appropriate in that case? But the two cases may not be easy to separate.
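
Something along these lines is the direction I mean, though as said it may not
separate cleanly (purely hypothetical sketch; the "wrote_contents" condition
doesn't exist today):

	/* at the end of kvm_gmem_prepare_folio() */
	if (wrote_contents)			/* hypothetical */
		folio_mark_uptodate(folio);
	/* else: leave !uptodate so kvm_gmem_populate() can still populate */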