Message-ID: <683ba0fe64dd5_13031529421@iweiny-mobl.notmuch>
Date: Sat, 31 May 2025 19:38:22 -0500
From: Ira Weiny <ira.weiny@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>, <kvm@...r.kernel.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>, <x86@...nel.org>,
<linux-fsdevel@...r.kernel.org>
CC: <ackerleytng@...gle.com>, <aik@....com>, <ajones@...tanamicro.com>,
<akpm@...ux-foundation.org>, <amoorthy@...gle.com>,
<anthony.yznaga@...cle.com>, <anup@...infault.org>, <aou@...s.berkeley.edu>,
<bfoster@...hat.com>, <binbin.wu@...ux.intel.com>, <brauner@...nel.org>,
<catalin.marinas@....com>, <chao.p.peng@...el.com>, <chenhuacai@...nel.org>,
<dave.hansen@...el.com>, <david@...hat.com>, <dmatlack@...gle.com>,
<dwmw@...zon.co.uk>, <erdemaktas@...gle.com>, <fan.du@...el.com>,
<fvdl@...gle.com>, <graf@...zon.com>, <haibo1.xu@...el.com>,
<hch@...radead.org>, <hughd@...gle.com>, <ira.weiny@...el.com>,
<isaku.yamahata@...el.com>, <jack@...e.cz>, <james.morse@....com>,
<jarkko@...nel.org>, <jgg@...pe.ca>, <jgowans@...zon.com>,
<jhubbard@...dia.com>, <jroedel@...e.de>, <jthoughton@...gle.com>,
<jun.miao@...el.com>, <kai.huang@...el.com>, <keirf@...gle.com>,
<kent.overstreet@...ux.dev>, <kirill.shutemov@...el.com>,
<liam.merwick@...cle.com>, <maciej.wieczor-retman@...el.com>,
<mail@...iej.szmigiero.name>, <maz@...nel.org>, <mic@...ikod.net>,
<michael.roth@....com>, <mpe@...erman.id.au>, <muchun.song@...ux.dev>,
<nikunj@....com>, <nsaenz@...zon.es>, <oliver.upton@...ux.dev>,
<palmer@...belt.com>, <pankaj.gupta@....com>, <paul.walmsley@...ive.com>,
<pbonzini@...hat.com>, <pdurrant@...zon.co.uk>, <peterx@...hat.com>,
<pgonda@...gle.com>, <pvorel@...e.cz>, <qperret@...gle.com>,
<quic_cvanscha@...cinc.com>, <quic_eberman@...cinc.com>,
<quic_mnalajal@...cinc.com>, <quic_pderrin@...cinc.com>,
<quic_pheragu@...cinc.com>, <quic_svaddagi@...cinc.com>,
<quic_tsoni@...cinc.com>, <richard.weiyang@...il.com>,
<rick.p.edgecombe@...el.com>, <rientjes@...gle.com>, <roypat@...zon.co.uk>,
<rppt@...nel.org>, <seanjc@...gle.com>, <shuah@...nel.org>,
<steven.price@....com>, <steven.sistare@...cle.com>,
<suzuki.poulose@....com>, <tabba@...gle.com>, <thomas.lendacky@....com>,
<usama.arif@...edance.com>, <vannapurve@...gle.com>, <vbabka@...e.cz>,
<viro@...iv.linux.org.uk>, <vkuznets@...hat.com>, <wei.w.wang@...el.com>,
<will@...nel.org>, <willy@...radead.org>, <xiaoyao.li@...el.com>,
<yan.y.zhao@...el.com>, <yilun.xu@...el.com>, <yuzenghui@...wei.com>,
<zhiquan1.li@...el.com>
Subject: Re: [RFC PATCH v2 23/51] mm: hugetlb: Refactor out
hugetlb_alloc_folio()
Ackerley Tng wrote:
> Refactor out hugetlb_alloc_folio() from alloc_hugetlb_folio(), which
> handles allocation of a folio and cgroup charging.
>
> Other than flags to control charging in the allocation process,
> hugetlb_alloc_folio() also has parameters for memory policy.
>
> This refactoring as a whole decouples hugetlb page allocation from
> hugetlbfs, where (1) the subpool is stored at the fs mount, (2)
> reservations are made during mmap and stored in the vma, (3) mpol must
> be stored at vma->vm_policy, and (4) a vma must be used for allocation
> even if the pages are not meant to be used by the host process.
>
> This decoupling will allow hugetlb_alloc_folio() to be used by
> guest_memfd in later patches. In guest_memfd, (1) a subpool is created
> per-fd and is stored on the inode, (2) no vma-related reservations are
> used, and (3) mpol may not be associated with a vma, since (4) private
> pages will not be mappable to userspace and hence have no associated
> vmas.
>
> This could hopefully also open hugetlb up as a more generic source of
> hugetlb pages that are not bound to hugetlbfs, with the complexities
> of userspace/mmap/vma-related reservations confined to hugetlbfs.
>
> Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
> Change-Id: I60528f246341268acbf0ed5de7752ae2cacbef93
> ---
> include/linux/hugetlb.h | 12 +++
> mm/hugetlb.c | 192 ++++++++++++++++++++++------------------
> 2 files changed, 118 insertions(+), 86 deletions(-)
>
[snip]
>
> +/**
> + * hugetlb_alloc_folio() - Allocates a hugetlb folio.
> + *
> + * @h: struct hstate to allocate from.
> + * @mpol: struct mempolicy to apply for this folio allocation.
> + * @ilx: Interleave index for interpretation of @mpol.
> + * @charge_cgroup_rsvd: Set to true to charge cgroup reservation.
> + * @use_existing_reservation: Set to true if this allocation should use an
> + * existing hstate reservation.
> + *
> + * This function handles cgroup and global hstate reservations. VMA-related
> + * reservations and subpool debiting must be handled by the caller if necessary.
> + *
> + * Return: folio on success or negated error otherwise.
> + */
> +struct folio *hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol,
> +                                  pgoff_t ilx, bool charge_cgroup_rsvd,
> +                                  bool use_existing_reservation)
> +{
> +	unsigned int nr_pages = pages_per_huge_page(h);
> +	struct hugetlb_cgroup *h_cg = NULL;
> +	struct folio *folio = NULL;
> +	nodemask_t *nodemask;
> +	gfp_t gfp_mask;
> +	int nid;
> +	int idx;
> +	int ret;
> +
> +	idx = hstate_index(h);
> +
> +	if (charge_cgroup_rsvd) {
> +		if (hugetlb_cgroup_charge_cgroup_rsvd(idx, nr_pages, &h_cg))
> +			goto out;
Why not just return here?
return ERR_PTR(-ENOSPC);
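
Something like this, perhaps (untested sketch):

	if (charge_cgroup_rsvd) {
		if (hugetlb_cgroup_charge_cgroup_rsvd(idx, nr_pages, &h_cg))
			return ERR_PTR(-ENOSPC);
	}
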
> +	}
> +
> +	if (hugetlb_cgroup_charge_cgroup(idx, nr_pages, &h_cg))
> +		goto out_uncharge_cgroup_reservation;
> +
> +	gfp_mask = htlb_alloc_mask(h);
> +	nid = policy_node_nodemask(mpol, gfp_mask, ilx, &nodemask);
> +
> +	spin_lock_irq(&hugetlb_lock);
> +
> +	if (use_existing_reservation || available_huge_pages(h))
> +		folio = dequeue_hugetlb_folio(h, gfp_mask, mpol, nid, nodemask);
> +
> +	if (!folio) {
> +		spin_unlock_irq(&hugetlb_lock);
> +		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, mpol, nid, nodemask);
> +		if (!folio)
> +			goto out_uncharge_cgroup;
> +		spin_lock_irq(&hugetlb_lock);
> +		list_add(&folio->lru, &h->hugepage_activelist);
> +		folio_ref_unfreeze(folio, 1);
> +		/* Fall through */
> +	}
> +
> +	if (use_existing_reservation) {
> +		folio_set_hugetlb_restore_reserve(folio);
> +		h->resv_huge_pages--;
> +	}
> +
> +	hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio);
> +
> +	if (charge_cgroup_rsvd)
> +		hugetlb_cgroup_commit_charge_rsvd(idx, nr_pages, h_cg, folio);
> +
> +	spin_unlock_irq(&hugetlb_lock);
> +
> +	gfp_mask = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
> +	ret = mem_cgroup_charge_hugetlb(folio, gfp_mask);
> +	/*
> +	 * Unconditionally increment NR_HUGETLB here. If it turns out that
> +	 * mem_cgroup_charge_hugetlb failed, then immediately free the page and
> +	 * decrement NR_HUGETLB.
> +	 */
> +	lruvec_stat_mod_folio(folio, NR_HUGETLB, pages_per_huge_page(h));
> +
> +	if (ret == -ENOMEM) {
> +		free_huge_folio(folio);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	return folio;
> +
> +out_uncharge_cgroup:
> +	hugetlb_cgroup_uncharge_cgroup(idx, nr_pages, h_cg);
> +out_uncharge_cgroup_reservation:
> +	if (charge_cgroup_rsvd)
> +		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, nr_pages, h_cg);
I find the direct copy of the unwind logic from alloc_hugetlb_folio()
cumbersome, and this seems like a good opportunity to clean it up.
> +out:
> +	folio = ERR_PTR(-ENOSPC);
> +	goto out;
Endless loop?
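
FWIW, with an early return for the reservation-charge failure as above,
the out: label (and the goto back to itself) would not be needed at all.
A rough, untested sketch of what the tail could look like:

out_uncharge_cgroup:
	hugetlb_cgroup_uncharge_cgroup(idx, nr_pages, h_cg);
out_uncharge_cgroup_reservation:
	if (charge_cgroup_rsvd)
		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, nr_pages, h_cg);
	/* No folio could be dequeued or allocated. */
	return ERR_PTR(-ENOSPC);
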
Ira
[snip]