Message-ID: <ad77da83-0e6e-47a1-abe7-8cfdfce8b254@linux.intel.com>
Date: Wed, 28 May 2025 16:55:05 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: kvm@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
x86@...nel.org, linux-fsdevel@...r.kernel.org, aik@....com,
ajones@...tanamicro.com, akpm@...ux-foundation.org, amoorthy@...gle.com,
anthony.yznaga@...cle.com, anup@...infault.org, aou@...s.berkeley.edu,
bfoster@...hat.com, brauner@...nel.org, catalin.marinas@....com,
chao.p.peng@...el.com, chenhuacai@...nel.org, dave.hansen@...el.com,
david@...hat.com, dmatlack@...gle.com, dwmw@...zon.co.uk,
erdemaktas@...gle.com, fan.du@...el.com, fvdl@...gle.com, graf@...zon.com,
haibo1.xu@...el.com, hch@...radead.org, hughd@...gle.com,
ira.weiny@...el.com, isaku.yamahata@...el.com, jack@...e.cz,
james.morse@....com, jarkko@...nel.org, jgg@...pe.ca, jgowans@...zon.com,
jhubbard@...dia.com, jroedel@...e.de, jthoughton@...gle.com,
jun.miao@...el.com, kai.huang@...el.com, keirf@...gle.com,
kent.overstreet@...ux.dev, kirill.shutemov@...el.com,
liam.merwick@...cle.com, maciej.wieczor-retman@...el.com,
mail@...iej.szmigiero.name, maz@...nel.org, mic@...ikod.net,
michael.roth@....com, mpe@...erman.id.au, muchun.song@...ux.dev,
nikunj@....com, nsaenz@...zon.es, oliver.upton@...ux.dev,
palmer@...belt.com, pankaj.gupta@....com, paul.walmsley@...ive.com,
pbonzini@...hat.com, pdurrant@...zon.co.uk, peterx@...hat.com,
pgonda@...gle.com, pvorel@...e.cz, qperret@...gle.com,
quic_cvanscha@...cinc.com, quic_eberman@...cinc.com,
quic_mnalajal@...cinc.com, quic_pderrin@...cinc.com,
quic_pheragu@...cinc.com, quic_svaddagi@...cinc.com, quic_tsoni@...cinc.com,
richard.weiyang@...il.com, rick.p.edgecombe@...el.com, rientjes@...gle.com,
roypat@...zon.co.uk, rppt@...nel.org, seanjc@...gle.com, shuah@...nel.org,
steven.price@....com, steven.sistare@...cle.com, suzuki.poulose@....com,
tabba@...gle.com, thomas.lendacky@....com, usama.arif@...edance.com,
vannapurve@...gle.com, vbabka@...e.cz, viro@...iv.linux.org.uk,
vkuznets@...hat.com, wei.w.wang@...el.com, will@...nel.org,
willy@...radead.org, xiaoyao.li@...el.com, yan.y.zhao@...el.com,
yilun.xu@...el.com, yuzenghui@...wei.com, zhiquan1.li@...el.com
Subject: Re: [RFC PATCH v2 16/51] mm: hugetlb: Consolidate interpretation of
gbl_chg within alloc_hugetlb_folio()
On 5/15/2025 7:41 AM, Ackerley Tng wrote:
> Previously, gbl_chg was passed from alloc_hugetlb_folio() into
> dequeue_hugetlb_folio_vma(), leaking the concept of gbl_chg into
> dequeue_hugetlb_folio_vma().
>
> This patch consolidates the interpretation of gbl_chg into
> alloc_hugetlb_folio(), also renaming dequeue_hugetlb_folio_vma() to
> dequeue_hugetlb_folio() so dequeue_hugetlb_folio() can just focus on
> dequeuing a folio.
>
> Change-Id: I31bf48af2400b6e13b44d03c8be22ce1a9092a9c
> Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
> ---
> mm/hugetlb.c | 28 +++++++++++-----------------
> 1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6ea1be71aa42..b843e869496f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1364,9 +1364,9 @@ static unsigned long available_huge_pages(struct hstate *h)
> return h->free_huge_pages - h->resv_huge_pages;
> }
>
> -static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
> - struct vm_area_struct *vma,
> - unsigned long address, long gbl_chg)
> +static struct folio *dequeue_hugetlb_folio(struct hstate *h,
> + struct vm_area_struct *vma,
> + unsigned long address)
The rename seems unnecessary in this patch, since the function still takes a
vma and uses it. It may be better to move the rename to a later patch.
> {
> struct folio *folio = NULL;
> struct mempolicy *mpol;
> @@ -1374,13 +1374,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
> nodemask_t *nodemask;
> int nid;
>
> - /*
> - * gbl_chg==1 means the allocation requires a new page that was not
> - * reserved before. Making sure there's at least one free page.
> - */
> - if (gbl_chg && !available_huge_pages(h))
> - goto err;
> -
> gfp_mask = htlb_alloc_mask(h);
> nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
>
> @@ -1398,9 +1391,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>
> mpol_cond_put(mpol);
> return folio;
> -
> -err:
> - return NULL;
> }
>
> /*
> @@ -3074,12 +3064,16 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> goto out_uncharge_cgroup_reservation;
>
> spin_lock_irq(&hugetlb_lock);
> +
> /*
> - * glb_chg is passed to indicate whether or not a page must be taken
> - * from the global free pool (global change). gbl_chg == 0 indicates
> - * a reservation exists for the allocation.
> + * gbl_chg == 0 indicates a reservation exists for the allocation - so
> + * try dequeuing a page. If there are available_huge_pages(), try using
> + * them!
> */
> - folio = dequeue_hugetlb_folio_vma(h, vma, addr, gbl_chg);
> + folio = NULL;
> + if (!gbl_chg || available_huge_pages(h))
> + folio = dequeue_hugetlb_folio(h, vma, addr);
> +
> if (!folio) {
> spin_unlock_irq(&hugetlb_lock);
> folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);