Message-ID: <20260203020913.100838-1-kartikey406@gmail.com>
Date: Tue, 3 Feb 2026 07:39:13 +0530
From: Deepanshu Kartikey <kartikey406@...il.com>
To: pbonzini@...hat.com
Cc: kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Deepanshu Kartikey <kartikey406@...il.com>,
syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com,
Deepanshu Kartikey <Kartikey406@...il.com>
Subject: [PATCH] KVM: guest_memfd: Reject large folios until support is implemented

Large folios are not yet supported in guest_memfd (see the TODO comment
in kvm_gmem_get_folio()), but one can still be allocated if userspace
uses madvise(MADV_HUGEPAGE), which overrides the folio order
restrictions set by mapping_set_folio_order_range().

When a large folio is allocated, it trips the WARN_ON_ONCE() in
kvm_gmem_fault_user_mapping(), which escalates to a kernel panic when
panic_on_warn is enabled.

Add mapping_set_folio_order_range(0, 0) as defense in depth, and
explicitly check for large folios in kvm_gmem_get_folio() on both the
fast path (existing folio) and the slow path (newly created folio).
If a large folio is found, unlock it, drop the reference, and return
-E2BIG so the warning can no longer be reached.
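
For illustration, a minimal userspace sketch of the sequence described
above (this is not the syzbot reproducer; obtaining a mmap-able
guest_memfd fd via KVM_CREATE_GUEST_MEMFD is elided, and the
function/parameter names below are placeholders):

	#include <stddef.h>
	#include <sys/mman.h>

	static void touch_gmem_with_hugepage_hint(int gmem_fd, size_t size)
	{
		char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
			       MAP_SHARED, gmem_fd, 0);

		if (p == MAP_FAILED)
			return;

		/* Hint that large folios are wanted for this range. */
		madvise(p, size, MADV_HUGEPAGE);

		/*
		 * The first write faults through
		 * kvm_gmem_fault_user_mapping(); if a large folio was
		 * allocated, the WARN_ON_ONCE() fires.
		 */
		p[0] = 1;
	}
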
Reported-by: syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44
Fixes: b85524314a3d ("KVM: guest_memfd: delay kvm_gmem_prepare_folio() until the memory is passed to the guest")
Tested-by: syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com
Signed-off-by: Deepanshu Kartikey <Kartikey406@...il.com>
---
virt/kvm/guest_memfd.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fdaea3422c30..ee5bcf238f98 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -143,13 +143,29 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
folio = __filemap_get_folio(inode->i_mapping, index,
FGP_LOCK | FGP_ACCESSED, 0);
if (!IS_ERR(folio))
- return folio;
+ goto check_folio;

policy = mpol_shared_policy_lookup(&GMEM_I(inode)->policy, index);
folio = __filemap_get_folio_mpol(inode->i_mapping, index,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
mapping_gfp_mask(inode->i_mapping), policy);
mpol_cond_put(policy);
+ if (IS_ERR(folio))
+ return folio;
+check_folio:
+ /*
+ * Large folios are not supported yet. One can still show up
+ * despite mapping_set_folio_order_range() if userspace uses
+ * madvise(MADV_HUGEPAGE), which can override the folio order
+ * restrictions. Unlock and drop such a folio and return
+ * -E2BIG so that the WARN_ON_ONCE() in the user-mapping
+ * fault path is never reached.
+ */
+ if (folio_test_large(folio)) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return ERR_PTR(-E2BIG);
+ }

return folio;
}
@@ -596,6 +612,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
inode->i_mode |= S_IFREG;
inode->i_size = size;
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_folio_order_range(inode->i_mapping, 0, 0);
mapping_set_inaccessible(inode->i_mapping);
/* Unmovable mappings are supposed to be marked unevictable as well. */
WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
--
2.43.0