Message-ID: <29c347bde68ec027259654e8e85371307edf7058.1770148108.git.ackerleytng@google.com>
Date: Tue, 3 Feb 2026 14:50:57 -0800
From: Ackerley Tng <ackerleytng@...gle.com>
To: syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com
Cc: kartikey406@...il.com, linux-kernel@...r.kernel.org,
syzkaller-bugs@...glegroups.com, Ackerley Tng <ackerleytng@...gle.com>
Subject: [PATCH 1/2] KVM: guest_memfd: Always use order 0 when allocating for guest_memfd
#syz test: git://git.kernel.org/pub/scm/virt/kvm/kvm.git next

filemap_{grab,get}_folio() and related functions, used since the early
stages of guest_memfd, determine the order of the folio to be allocated
by looking up mapping_min_folio_order(mapping). As identified by
syzbot, MADV_HUGEPAGE can be used to raise the result of
mapping_min_folio_order() above 0, leading to the allocation of a huge
page and a subsequent WARN.
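
As a rough sketch (not the exact page cache code), the creation path
in __filemap_get_folio() derives the allocation order from the mapping
rather than from the caller alone, so guest_memfd never really pinned
the order at 0:

	/*
	 * Illustrative only: the effective order is clamped to the
	 * mapping's minimum, so once MADV_HUGEPAGE raises
	 * mapping_min_folio_order(), even an order-0 request ends up
	 * allocating a large folio.
	 */
	unsigned int min_order = mapping_min_folio_order(mapping);
	unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));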

Refactor guest_memfd's allocation path to call filemap_add_folio()
directly, always specifying an order of 0.
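
The one subtlety, visible in the diff below, is the -EEXIST case:
filemap_add_folio() returning -EEXIST means another task inserted a
folio at this index first, so the unused copy is dropped and the
fast-path lookup is retried:

	ret = filemap_add_folio(inode->i_mapping, folio, index, gfp);
	if (ret)
		folio_put(folio);	/* drop our copy on any failure */
	if (ret == -EEXIST)
		goto repeat;		/* raced with another insert */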

This replaces the functionality previously requested via FGP_LOCK and
FGP_CREAT: the explicit allocate-then-add sequence takes over creation,
and filemap_add_folio() returns the folio locked. Opportunistically
drop the functionality provided by FGP_ACCESSED on the allocation
path: guest_memfd folios don't need the accessed flag, since
guest_memfd memory is unevictable and there is no backing storage to
write back to.
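
As an illustrative caller (not a specific call site in this patch),
the caller-side contract is unchanged, since the folio still comes
back locked:

	folio = kvm_gmem_get_folio(inode, index);
	if (IS_ERR(folio))
		return PTR_ERR(folio);
	/* ... operate on the locked folio ... */
	folio_unlock(folio);
	folio_put(folio);
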
Reported-by: syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44
Tested-by: syzbot+33a04338019ac7e43a44@...kaller.appspotmail.com
Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
---
virt/kvm/guest_memfd.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fdaea3422c30..0c58f6aa5609 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -135,23 +135,35 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 	/* TODO: Support huge pages. */
 	struct mempolicy *policy;
 	struct folio *folio;
+	gfp_t gfp;
+	int ret;
 
 	/*
 	 * Fast-path: See if folio is already present in mapping to avoid
 	 * policy_lookup.
 	 */
+repeat:
 	folio = __filemap_get_folio(inode->i_mapping, index,
 				    FGP_LOCK | FGP_ACCESSED, 0);
 	if (!IS_ERR(folio))
 		return folio;
 
+	gfp = mapping_gfp_mask(inode->i_mapping);
+
 	policy = mpol_shared_policy_lookup(&GMEM_I(inode)->policy, index);
-	folio = __filemap_get_folio_mpol(inode->i_mapping, index,
-					 FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
-					 mapping_gfp_mask(inode->i_mapping), policy);
+	folio = filemap_alloc_folio(gfp, 0, policy);
 	mpol_cond_put(policy);
+	if (!folio)
+		return ERR_PTR(-ENOMEM);
 
-	return folio;
+	ret = filemap_add_folio(inode->i_mapping, folio, index, gfp);
+	if (ret)
+		folio_put(folio);
+
+	if (ret == -EEXIST)
+		goto repeat;
+
+	return ret ? ERR_PTR(ret) : folio;
 }
 
 static enum kvm_gfn_range_filter kvm_gmem_get_invalidate_filter(struct inode *inode)
--
2.53.0.rc2.204.g2597b5adb4-goog