Message-ID: <6cdc1de1-f760-4084-8eae-102ca16fee20@arm.com>
Date: Wed, 19 Feb 2025 15:29:44 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Nico Pache <npache@...hat.com>, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: anshuman.khandual@....com, catalin.marinas@....com, cl@...two.org,
vbabka@...e.cz, mhocko@...e.com, apopple@...dia.com,
dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org,
jack@...e.cz, srivatsa@...il.mit.edu, haowenchao22@...il.com,
hughd@...gle.com, aneesh.kumar@...nel.org, yang@...amperecomputing.com,
peterx@...hat.com, ioworker0@...il.com, wangkefeng.wang@...wei.com,
ziy@...dia.com, jglisse@...gle.com, surenb@...gle.com,
vishal.moola@...il.com, zokeefe@...gle.com, zhengqi.arch@...edance.com,
jhubbard@...dia.com, 21cnbao@...il.com, willy@...radead.org,
kirill.shutemov@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
raquini@...hat.com, dev.jain@....com, sunnanyong@...wei.com,
usamaarif642@...il.com, audra@...hat.com, akpm@...ux-foundation.org,
rostedt@...dmis.org, mathieu.desnoyers@...icios.com, tiwai@...e.de
Subject: Re: [RFC v2 4/9] khugepaged: generalize alloc_charge_folio for mTHP
support
On 11/02/2025 00:30, Nico Pache wrote:
> alloc_charge_folio allocates the new folio for the khugepaged collapse.
> Generalize the order of the folio allocations to support future mTHP
> collapsing.
>
> No functional changes in this patch.
>
> Signed-off-by: Nico Pache <npache@...hat.com>
> ---
> mm/khugepaged.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index c834ea842847..0cfcdc11cabd 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1074,14 +1074,14 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> }
>
> static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> - struct collapse_control *cc)
> + struct collapse_control *cc, int order)
> {
> gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> GFP_TRANSHUGE);
> int node = khugepaged_find_target_node(cc);
> struct folio *folio;
>
> - folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> + folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
> if (!folio) {
> *foliop = NULL;
> count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
Stats management is different for PMD-sized THP vs mTHP. All the PMD-sized THP
stats continue to be accumulated in the legacy interfaces (this vm_event shows
up in /proc/vmstat); other THP sizes are not accounted there. All mTHP sizes
(*including* PMD-sized) should be accounted in
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/*. There is a file
for each stat.

We decided to do it this way for fear of breaking unenlightened user space that
only understands PMD-sized THP.

You can find the mTHP stats machinery at count_mthp_stat().
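For illustration, the failure path above could end up reporting to both
interfaces, along these lines (a rough sketch only: count_mthp_stat() already
exists, but MTHP_STAT_COLLAPSE_ALLOC_FAILED is a hypothetical new entry that
this series would need to add to enum mthp_stat_item):

	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
	if (!folio) {
		*foliop = NULL;
		/*
		 * Legacy vm_event: keep it PMD-only so the existing counter
		 * doesn't change meaning for unenlightened user space.
		 */
		if (order == HPAGE_PMD_ORDER)
			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
		/*
		 * Per-size stat: counts every order, PMD included.
		 * (Hypothetical stat item, see above.)
		 */
		count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
		return SCAN_ALLOC_HUGE_PAGE_FAIL;
	}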
> @@ -1125,7 +1125,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> */
> mmap_read_unlock(mm);
>
> - result = alloc_charge_folio(&folio, mm, cc);
> + result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
>
> @@ -1851,7 +1851,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
> VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
>
> - result = alloc_charge_folio(&new_folio, mm, cc);
> + result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out;
>