Message-ID: <cbdec587dec5ee10de4e4596d158c871e9630cac.1747264138.git.ackerleytng@google.com>
Date: Wed, 14 May 2025 16:41:56 -0700
From: Ackerley Tng <ackerleytng@...gle.com>
To: kvm@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
x86@...nel.org, linux-fsdevel@...r.kernel.org
Cc: ackerleytng@...gle.com, aik@....com, ajones@...tanamicro.com,
akpm@...ux-foundation.org, amoorthy@...gle.com, anthony.yznaga@...cle.com,
anup@...infault.org, aou@...s.berkeley.edu, bfoster@...hat.com,
binbin.wu@...ux.intel.com, brauner@...nel.org, catalin.marinas@....com,
chao.p.peng@...el.com, chenhuacai@...nel.org, dave.hansen@...el.com,
david@...hat.com, dmatlack@...gle.com, dwmw@...zon.co.uk,
erdemaktas@...gle.com, fan.du@...el.com, fvdl@...gle.com, graf@...zon.com,
haibo1.xu@...el.com, hch@...radead.org, hughd@...gle.com, ira.weiny@...el.com,
isaku.yamahata@...el.com, jack@...e.cz, james.morse@....com,
jarkko@...nel.org, jgg@...pe.ca, jgowans@...zon.com, jhubbard@...dia.com,
jroedel@...e.de, jthoughton@...gle.com, jun.miao@...el.com,
kai.huang@...el.com, keirf@...gle.com, kent.overstreet@...ux.dev,
kirill.shutemov@...el.com, liam.merwick@...cle.com,
maciej.wieczor-retman@...el.com, mail@...iej.szmigiero.name, maz@...nel.org,
mic@...ikod.net, michael.roth@....com, mpe@...erman.id.au,
muchun.song@...ux.dev, nikunj@....com, nsaenz@...zon.es,
oliver.upton@...ux.dev, palmer@...belt.com, pankaj.gupta@....com,
paul.walmsley@...ive.com, pbonzini@...hat.com, pdurrant@...zon.co.uk,
peterx@...hat.com, pgonda@...gle.com, pvorel@...e.cz, qperret@...gle.com,
quic_cvanscha@...cinc.com, quic_eberman@...cinc.com,
quic_mnalajal@...cinc.com, quic_pderrin@...cinc.com, quic_pheragu@...cinc.com,
quic_svaddagi@...cinc.com, quic_tsoni@...cinc.com, richard.weiyang@...il.com,
rick.p.edgecombe@...el.com, rientjes@...gle.com, roypat@...zon.co.uk,
rppt@...nel.org, seanjc@...gle.com, shuah@...nel.org, steven.price@....com,
steven.sistare@...cle.com, suzuki.poulose@....com, tabba@...gle.com,
thomas.lendacky@....com, usama.arif@...edance.com, vannapurve@...gle.com,
vbabka@...e.cz, viro@...iv.linux.org.uk, vkuznets@...hat.com,
wei.w.wang@...el.com, will@...nel.org, willy@...radead.org,
xiaoyao.li@...el.com, yan.y.zhao@...el.com, yilun.xu@...el.com,
yuzenghui@...wei.com, zhiquan1.li@...el.com
Subject: [RFC PATCH v2 17/51] mm: hugetlb: Cleanup interpretation of gbl_chg
in alloc_hugetlb_folio()
The comment before dequeuing a folio explains that gbl_chg == 0 means a
reservation exists for the allocation.

In addition, if a vma reservation exists, there is no need to take a
reservation from the subpool, and gbl_chg was set to 0 to record that.

Replace both of these with code: subpool_reservation_exists defaults to
false, and if no vma reservation exists, a reservation is requested from
the subpool.

The existence of a reservation, whether in the vma or in the subpool, is
then summarized in reservation_exists, which is used to decide whether a
folio can be dequeued.
Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
Change-Id: I52130a0bf9f33e07d320a446cdb3ebfddd9de658
---
mm/hugetlb.c | 28 ++++++++++++----------------
1 file changed, 12 insertions(+), 16 deletions(-)
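
For reference, here is a minimal standalone sketch (not kernel code) of the
reservation decision this patch ends up with. The function name, the bool
map_chg parameter and npages_req are illustrative stand-ins for the kernel's
map_chg_state and the return value of hugepage_subpool_get_pages(spool, 1);
only the boolean logic mirrors the patch.

	/*
	 * Sketch of the reworked logic: a reservation exists either because
	 * the vma already holds one (no map charge needed) or because the
	 * subpool satisfied the request without debiting its limit
	 * (npages_req == 0).
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool reservation_exists_for_alloc(bool map_chg, long npages_req)
	{
		bool subpool_reservation_exists = false;

		if (map_chg) {
			if (npages_req < 0)
				return false;	/* the kernel bails out here instead */
			subpool_reservation_exists = (npages_req == 0);
		}

		return !map_chg || subpool_reservation_exists;
	}

	int main(void)
	{
		/* vma reservation already taken: no map charge needed */
		printf("%d\n", reservation_exists_for_alloc(false, 0));	/* 1 */
		/* map charge needed, subpool had a reservation to hand out */
		printf("%d\n", reservation_exists_for_alloc(true, 0));	/* 1 */
		/* map charge needed, subpool debited its limit instead */
		printf("%d\n", reservation_exists_for_alloc(true, 1));	/* 0 */
		return 0;
	}

The point of the cleanup is that this condition is now stated once, in code,
instead of being spread across a gbl_chg = 0 assignment and two comments.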
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b843e869496f..597f2b9f62b5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2999,8 +2999,10 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
{
struct hugepage_subpool *spool = subpool_vma(vma);
struct hstate *h = hstate_vma(vma);
+ bool subpool_reservation_exists;
+ bool reservation_exists;
struct folio *folio;
- long retval, gbl_chg;
+ long retval;
map_chg_state map_chg;
int ret, idx;
struct hugetlb_cgroup *h_cg = NULL;
@@ -3036,17 +3038,16 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
* that the allocation will not exceed the subpool limit.
* Or if it can get one from the pool reservation directly.
*/
+ subpool_reservation_exists = false;
if (map_chg) {
- gbl_chg = hugepage_subpool_get_pages(spool, 1);
- if (gbl_chg < 0)
+ int npages_req = hugepage_subpool_get_pages(spool, 1);
+
+ if (npages_req < 0)
goto out_end_reservation;
- } else {
- /*
- * If we have the vma reservation ready, no need for extra
- * global reservation.
- */
- gbl_chg = 0;
+
+ subpool_reservation_exists = npages_req == 0;
}
+ reservation_exists = !map_chg || subpool_reservation_exists;
/*
* If this allocation is not consuming a per-vma reservation,
@@ -3065,13 +3066,8 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
spin_lock_irq(&hugetlb_lock);
- /*
- * gbl_chg == 0 indicates a reservation exists for the allocation - so
- * try dequeuing a page. If there are available_huge_pages(), try using
- * them!
- */
folio = NULL;
- if (!gbl_chg || available_huge_pages(h))
+ if (reservation_exists || available_huge_pages(h))
folio = dequeue_hugetlb_folio(h, vma, addr);
if (!folio) {
@@ -3089,7 +3085,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
* Either dequeued or buddy-allocated folio needs to add special
* mark to the folio when it consumes a global reservation.
*/
- if (!gbl_chg) {
+ if (reservation_exists) {
folio_set_hugetlb_restore_reserve(folio);
h->resv_huge_pages--;
}
--
2.49.0.1045.g170613ef41-goog