Message-ID: <20260106102250.25194-1-yan.y.zhao@intel.com>
Date: Tue,  6 Jan 2026 18:22:50 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: pbonzini@...hat.com,
	seanjc@...gle.com
Cc: linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org,
	x86@...nel.org,
	rick.p.edgecombe@...el.com,
	dave.hansen@...el.com,
	kas@...nel.org,
	tabba@...gle.com,
	ackerleytng@...gle.com,
	michael.roth@....com,
	david@...nel.org,
	vannapurve@...gle.com,
	sagis@...gle.com,
	vbabka@...e.cz,
	thomas.lendacky@....com,
	nik.borisov@...e.com,
	pgonda@...gle.com,
	fan.du@...el.com,
	jun.miao@...el.com,
	francescolavra.fl@...il.com,
	jgross@...e.com,
	ira.weiny@...el.com,
	isaku.yamahata@...el.com,
	xiaoyao.li@...el.com,
	kai.huang@...el.com,
	binbin.wu@...ux.intel.com,
	chao.p.peng@...el.com,
	chao.gao@...el.com,
	yan.y.zhao@...el.com
Subject: [PATCH v3 16/24] KVM: guest_memfd: Split for punch hole and private-to-shared conversion

In TDX, private page tables require precise zapping because faulting the
zapped mappings back in requires the guest to re-accept them. Therefore,
before performing a zap for hole punching or a private-to-shared
conversion, huge leaves in the mirror page table that cross a boundary of
the GFN range being zapped must be split.
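
For illustration only (this helper is hypothetical and not part of the
patch, and assumes KVM's gfn_t type), a leaf is "cross-boundary" when it
overlaps the zap range but is not fully contained in it:

	/*
	 * Hypothetical helper, for illustration: a leaf covering GFNs
	 * [leaf_start, leaf_start + leaf_npages) crosses a boundary of
	 * the zap range [start, end) if the two ranges overlap but the
	 * leaf is not fully contained in the range.
	 */
	static bool leaf_crosses_boundary(gfn_t leaf_start, gfn_t leaf_npages,
					  gfn_t start, gfn_t end)
	{
		gfn_t leaf_end = leaf_start + leaf_npages;

		if (leaf_end <= start || leaf_start >= end)
			return false;	/* no overlap with the zap range */

		return leaf_start < start || leaf_end > end;
	}

For example, punching a hole over a single 4KB page in the middle of a
2MB private mapping requires splitting that 2MB leaf first, so that only
the punched page is zapped and the guest need not re-accept the rest of
the range.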

Splitting may fail, usually because of a memory allocation failure. If
this happens, hole punching and private-to-shared conversion should bail
out early and return the error to userspace.
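
For reference, a simplified outline of the resulting flow in
kvm_gmem_punch_hole() (mirroring the hunk below, with the truncate step
omitted):

	filemap_invalidate_lock(inode->i_mapping);
	kvm_gmem_invalidate_begin(inode, start, end);

	ret = kvm_gmem_split_private(inode, start, end);
	if (ret) {
		/* Splitting failed: undo the invalidation and bail out. */
		kvm_gmem_invalidate_end(inode, start, end);
		filemap_invalidate_unlock(inode->i_mapping);
		return ret;
	}
	/* Zap only after splitting has succeeded. */
	kvm_gmem_zap(inode, start, end);

kvm_gmem_convert() follows the same split-before-zap pattern for
private-to-shared conversions.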

Splitting is not necessary when zapping shared mappings, or when zapping
in kvm_gmem_release()/kvm_gmem_error_folio(): the penalty for zapping
more shared mappings than necessary is minimal, kvm_gmem_release() zaps
all mappings anyway, and kvm_gmem_error_folio() zaps the entire folio
range, where KVM's basic assumption is that a huge mapping must be backed
by a single folio.

Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
---
v3:
- Rebased to [2].
- Do not flush TLB for kvm_split_cross_boundary_leafs(), i.e., only flush
  TLB if zaps are performed.

[2] https://github.com/googleprodkernel/linux-cc/tree/wip-gmem-conversions-hugetlb-restructuring-12-08-25

RFC v2:
- Rebased to [1]. As changes in this patch are gmem specific, they may need
  to be updated if the implementation in [1] changes.
- Update kvm_split_boundary_leafs() to kvm_split_cross_boundary_leafs() and
  invoke it before kvm_gmem_punch_hole() and private-to-shared conversion.

[1] https://lore.kernel.org/all/cover.1747264138.git.ackerleytng@google.com/

RFC v1:
- new patch.
---
 virt/kvm/guest_memfd.c | 67 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 03613b791728..8e7fbed57a20 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -486,6 +486,55 @@ static int merge_truncate_range(struct inode *inode, pgoff_t start,
 	return ret;
 }
 
+static int __kvm_gmem_split_private(struct gmem_file *f, pgoff_t start, pgoff_t end)
+{
+	enum kvm_gfn_range_filter attr_filter = KVM_FILTER_PRIVATE;
+
+	bool locked = false;
+	struct kvm_memory_slot *slot;
+	struct kvm *kvm = f->kvm;
+	unsigned long index;
+	int ret = 0;
+
+	xa_for_each_range(&f->bindings, index, slot, start, end - 1) {
+		pgoff_t pgoff = slot->gmem.pgoff;
+		struct kvm_gfn_range gfn_range = {
+			.start = slot->base_gfn + max(pgoff, start) - pgoff,
+			.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
+			.slot = slot,
+			.may_block = true,
+			.attr_filter = attr_filter,
+		};
+
+		if (!locked) {
+			KVM_MMU_LOCK(kvm);
+			locked = true;
+		}
+
+		ret = kvm_split_cross_boundary_leafs(kvm, &gfn_range, false);
+		if (ret)
+			break;
+	}
+
+	if (locked)
+		KVM_MMU_UNLOCK(kvm);
+
+	return ret;
+}
+
+static int kvm_gmem_split_private(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	struct gmem_file *f;
+	int r = 0;
+
+	kvm_gmem_for_each_file(f, inode->i_mapping) {
+		r = __kvm_gmem_split_private(f, start, end);
+		if (r)
+			break;
+	}
+	return r;
+}
+
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
 	pgoff_t start = offset >> PAGE_SHIFT;
@@ -499,6 +548,13 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 	filemap_invalidate_lock(inode->i_mapping);
 
 	kvm_gmem_invalidate_begin(inode, start, end);
+
+	ret = kvm_gmem_split_private(inode, start, end);
+	if (ret) {
+		kvm_gmem_invalidate_end(inode, start, end);
+		filemap_invalidate_unlock(inode->i_mapping);
+		return ret;
+	}
 	kvm_gmem_zap(inode, start, end);
 
 	ret = merge_truncate_range(inode, start, len >> PAGE_SHIFT, true);
@@ -907,6 +963,17 @@ static int kvm_gmem_convert(struct inode *inode, pgoff_t start,
 	invalidate_start = kvm_gmem_compute_invalidate_start(inode, start);
 	invalidate_end = kvm_gmem_compute_invalidate_end(inode, end);
 	kvm_gmem_invalidate_begin(inode, invalidate_start, invalidate_end);
+
+	if (!to_private) {
+		r = kvm_gmem_split_private(inode, start, end);
+		if (r) {
+			*err_index = start;
+			mas_destroy(&mas);
+			kvm_gmem_invalidate_end(inode, invalidate_start, invalidate_end);
+			return r;
+		}
+	}
+
 	kvm_gmem_zap(inode, start, end);
 	kvm_gmem_invalidate_end(inode, invalidate_start, invalidate_end);
 
-- 
2.43.2

