Message-ID: <20241113-guestmem-library-v3-1-71fdee85676b@quicinc.com>
Date: Wed, 13 Nov 2024 14:34:36 -0800
From: Elliot Berman <quic_eberman@...cinc.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Sean Christopherson <seanjc@...gle.com>,
	Fuad Tabba <tabba@...gle.com>,
	Ackerley Tng <ackerleytng@...gle.com>,
	Mike Rapoport <rppt@...nel.org>,
	"H. Peter Anvin" <hpa@...or.com>
CC: James Gowans <jgowans@...zon.com>, <linux-fsdevel@...r.kernel.org>,
<kvm@...r.kernel.org>, <linux-coco@...ts.linux.dev>,
<linux-arm-msm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, Elliot Berman <quic_eberman@...cinc.com>
Subject: [PATCH RFC v3 1/2] KVM: guest_memfd: Convert .free_folio() to
.release_folio()

When guest_memfd becomes a library, a callback will need to be made to
the owner (KVM SEV) to transition pages back to hypervisor-owned/shared
state. This is currently done in the .free_folio() address space op,
but that callback cannot assume the mapping still exists. The
guest_memfd library will need the mapping to still exist in order to
look up its operations table.
The .release_folio() and .invalidate_folio() address space ops can
serve the same purpose here. The key difference between
.release_folio() and .free_folio() is whether the mapping is still
valid at the time of the callback. This approach was discussed at the
link below, but was not taken at the time because .free_folio() was
easier to implement.

Link: https://lore.kernel.org/kvm/20231016115028.996656-1-michael.roth@amd.com/
Signed-off-by: Elliot Berman <quic_eberman@...cinc.com>
---
 virt/kvm/guest_memfd.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 47a9f68f7b247f4cba0c958b4c7cd9458e7c46b4..13f83ad8a4c26ba82aca4f2684f22044abb4bc19 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -358,22 +358,35 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol
 }
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
-static void kvm_gmem_free_folio(struct folio *folio)
+static bool kvm_gmem_release_folio(struct folio *folio, gfp_t gfp)
 {
 	struct page *page = folio_page(folio, 0);
 	kvm_pfn_t pfn = page_to_pfn(page);
 	int order = folio_order(folio);
 
 	kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
+
+	return true;
+}
+
+static void kvm_gmem_invalidate_folio(struct folio *folio, size_t offset,
+				      size_t len)
+{
+	WARN_ON_ONCE(offset != 0);
+	WARN_ON_ONCE(len != folio_size(folio));
+
+	if (offset == 0 && len == folio_size(folio))
+		filemap_release_folio(folio, 0);
 }
 #endif
 
 static const struct address_space_operations kvm_gmem_aops = {
 	.dirty_folio = noop_dirty_folio,
-	.migrate_folio = kvm_gmem_migrate_folio,
+	.migrate_folio = kvm_gmem_migrate_folio,
 	.error_remove_folio = kvm_gmem_error_folio,
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
-	.free_folio = kvm_gmem_free_folio,
+	.release_folio = kvm_gmem_release_folio,
+	.invalidate_folio = kvm_gmem_invalidate_folio,
 #endif
 };
--
2.34.1