Message-ID: <20250613005400.3694904-4-michael.roth@amd.com>
Date: Thu, 12 Jun 2025 19:53:58 -0500
From: Michael Roth <michael.roth@....com>
To: <kvm@...r.kernel.org>
CC: <linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <david@...hat.com>, <tabba@...gle.com>,
<vannapurve@...gle.com>, <ackerleytng@...gle.com>, <ira.weiny@...el.com>,
<thomas.lendacky@....com>, <pbonzini@...hat.com>, <seanjc@...gle.com>,
<vbabka@...e.cz>, <joro@...tes.org>, <pratikrajesh.sampat@....com>,
<liam.merwick@...cle.com>, <yan.y.zhao@...el.com>, <aik@....com>
Subject: [PATCH RFC v1 3/5] KVM: guest_memfd: Call arch invalidation hooks when converting to shared

When guest_memfd is used for both shared/private memory, converting
pages to shared may require kvm_arch_gmem_invalidate() to be issued to
return the pages to an architecturally-defined "shared" state if the
pages were previously allocated and transitioned to a private state via
kvm_arch_gmem_prepare().

Handle this by issuing the appropriate kvm_arch_gmem_invalidate() calls
when converting ranges in the filemap to a shared state.
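
For context, the arch hook contract relied on here looks roughly like
the following (a paraphrased sketch of the kvm_host.h declarations
rather than a verbatim quote; the PFN range is [start, end), matching
the folio_pfn()-based call in the hunk below):

  #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
  /* Arch hook: return PFNs in [start, end) to the "shared" state. */
  void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
  #else
  /* No-op when the arch does not need gmem invalidation. */
  static inline void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
  {
  }
  #endif
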
Signed-off-by: Michael Roth <michael.roth@....com>
---
 virt/kvm/guest_memfd.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b77cdccd340e..f27e1f3962bb 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -203,6 +203,28 @@ static int kvm_gmem_shareability_apply(struct inode *inode,
         struct maple_tree *mt;
 
         mt = &kvm_gmem_private(inode)->shareability;
+
+        /*
+         * If a folio has been allocated then it was possibly in a private
+         * state prior to conversion. Ensure arch invalidations are issued
+         * to return the folio to a normal/shared state as defined by the
+         * architecture before tracking it as shared in gmem.
+         */
+        if (m == SHAREABILITY_ALL) {
+                pgoff_t idx;
+
+                for (idx = work->start; idx < work->start + work->nr_pages; idx++) {
+                        struct folio *folio = filemap_lock_folio(inode->i_mapping, idx);
+
+                        if (!IS_ERR(folio)) {
+                                kvm_arch_gmem_invalidate(folio_pfn(folio),
+                                                         folio_pfn(folio) + folio_nr_pages(folio));
+                                folio_unlock(folio);
+                                folio_put(folio);
+                        }
+                }
+        }
+
         return kvm_gmem_shareability_store(mt, work->start, work->nr_pages, m);
 }
 
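
As a usage note, an architecture implementing the hook is expected to
walk the PFN range and return each page to the shared state. On
SEV-SNP that amounts to RMP updates, roughly along these lines (an
illustrative sketch only, not the actual sev.c implementation: error
handling and 2M-page handling are elided, and rmp_make_shared() and
PG_LEVEL_4K are referenced as examples of existing x86 helpers):

  void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
  {
          kvm_pfn_t pfn;

          /* Illustrative: flip each 4K PFN back to shared in the RMP. */
          for (pfn = start; pfn < end; pfn++)
                  WARN_ON_ONCE(rmp_make_shared(pfn, PG_LEVEL_4K));
  }
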
--
2.25.1