Message-ID: <Z_XmUrbxKtYmzmJ6@casper.infradead.org>
Date: Wed, 9 Apr 2025 04:15:30 +0100
From: Matthew Wilcox <willy@...radead.org>
To: nifan.cxl@...il.com
Cc: muchun.song@...ux.dev, mcgrof@...nel.org, a.manzanares@...sung.com,
dave@...olabs.net, akpm@...ux-foundation.org, david@...hat.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Fan Ni <fan.ni@...sung.com>
Subject: Re: [PATCH] mm/hugetlb: Convert &folio->page to folio_page(folio, 0)
On Tue, Apr 08, 2025 at 05:49:10PM -0700, nifan.cxl@...il.com wrote:
> From: Fan Ni <fan.ni@...sung.com>
>
> Convert the use of &folio->page to folio_page(folio, 0) where struct
> folio fits in. This is part of the effort to move some fields out of
> struct page to reduce its size.
Thanks for sending the patch. You've mixed together quite a few things;
I'd suggest focusing on one API at a time.
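For reference, folio_page() is currently just a wrapper around the
embedded struct page (roughly, modulo the nth_page() details):

#define folio_page(folio, n)	nth_page(&(folio)->page, n)

so for n == 0 the two spellings are equivalent today; the point of the
conversion is to stop open-coding &folio->page so that the field can
eventually go away.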
> folio_get(folio);
> - folio_add_file_rmap_pmd(folio, &folio->page, vma);
> + folio_add_file_rmap_pmd(folio, folio_page(folio, 0), vma);
> add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR);
I think this is fine, but would defer to David Hildenbrand.
> folio_get(folio);
> - folio_add_file_rmap_pud(folio, &folio->page, vma);
> + folio_add_file_rmap_pud(folio, folio_page(folio, 0), vma);
> add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
If that is fine, then so is this (put them in the same patchset).
> spin_unlock(ptl);
> - if (flush_needed)
> - tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
> + if (flush_needed) {
> + tlb_remove_page_size(tlb, folio_page(folio, 0),
> + HPAGE_PMD_SIZE);
> + }
You don't need to add the extra braces here. I haven't looked into this
family of APIs; I'm not sure whether we should keep passing the page here
or whether tlb_remove_page_size() should take a folio argument.
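If the latter, a wrapper might look something like this (hypothetical
and untested; there is no tlb_remove_folio() today):

static inline void tlb_remove_folio(struct mmu_gather *tlb,
		struct folio *folio)
{
	tlb_remove_page_size(tlb, folio_page(folio, 0), folio_size(folio));
}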
> if (folio_maybe_dma_pinned(src_folio) ||
> - !PageAnonExclusive(&src_folio->page)) {
> + !PageAnonExclusive(folio_page(src_folio, 0))) {
> err = -EBUSY;
mmm. Another David question.
> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> - struct page *new_head = &folio->page + i;
> + struct page *new_head = folio_page(folio, i);
>
This is definitely the right thing to do.
> @@ -3403,7 +3405,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> if (new_order)
> folio_set_order(folio, new_order);
> else
> - ClearPageCompound(&folio->page);
> + ClearPageCompound(folio_page(folio, 0));
> }
I might be inclined to leave this one alone; this whole function needs
to be rewritten as part of the folio split.
> folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> + split_page_owner(folio_page(folio, 0), old_order, split_order);
> pgalloc_tag_split(folio, old_order, split_order);
Not sure if split_folio_owner is something that should exist. Haven't
looked into it.
> */
> - free_page_and_swap_cache(&new_folio->page);
> + free_page_and_swap_cache(folio_page(new_folio, 0));
> }
free_page_and_swap_cache() should be converted to
free_folio_and_swap_cache().
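Something like this (untested; assumes the body of
free_page_and_swap_cache() stays as it is today, just dropping the
page_folio() call):

void free_folio_and_swap_cache(struct folio *folio)
{
	free_swap_cache(folio);
	if (!is_huge_zero_folio(folio))
		folio_put(folio);
}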
>
> - return __folio_split(folio, new_order, &folio->page, page, list, true);
> + return __folio_split(folio, new_order, folio_page(folio, 0), page,
> + list, true);
> }
Probably right.
> {
> - return __folio_split(folio, new_order, split_at, &folio->page, list,
> - false);
> + return __folio_split(folio, new_order, split_at, folio_page(folio, 0),
> + list, false);
> }
Ditto.
>
> - return split_huge_page_to_list_to_order(&folio->page, list, ret);
> + return split_huge_page_to_list_to_order(folio_page(folio, 0), list,
> + ret);
> }
Ditto.
>
> - if (is_migrate_isolate_page(&folio->page))
> + if (is_migrate_isolate_page(folio_page(folio, 0)))
> continue;
I think we need an is_migrate_isolate_folio() instead of this.
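Untested sketch, mirroring is_migrate_isolate_page(); the &folio->page
is fine here, since internal helpers like this are exactly where it
belongs:

static inline bool is_migrate_isolate_folio(struct folio *folio)
{
	return get_pageblock_migratetype(&folio->page) == MIGRATE_ISOLATE;
}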
> if (folio_test_anon(folio))
> - __ClearPageAnonExclusive(&folio->page);
> + __ClearPageAnonExclusive(folio_page(folio, 0));
> folio->mapping = NULL;
... David.
>
> - split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
> + split_page_owner(folio_page(folio, 0), huge_page_order(src),
> + huge_page_order(dst));
See earlier.
> if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
> - if (!PageAnonExclusive(&old_folio->page)) {
> + if (!PageAnonExclusive(folio_page(old_folio, 0))) {
> folio_move_anon_rmap(old_folio, vma);
> - SetPageAnonExclusive(&old_folio->page);
> + SetPageAnonExclusive(folio_page(old_folio, 0));
> }
David.
> }
> VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
> - PageAnonExclusive(&old_folio->page), &old_folio->page);
> + PageAnonExclusive(folio_page(old_folio, 0)),
> + folio_page(old_folio, 0));
The PageAnonExclusive() part of this change is for David to comment on,
but this should be a VM_BUG_ON_FOLIO() instead of calling folio_page()
to keep this a VM_BUG_ON_PAGE().
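ie something like:

	VM_BUG_ON_FOLIO(folio_test_anon(old_folio) &&
			PageAnonExclusive(folio_page(old_folio, 0)),
			old_folio);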
>
> - unmap_ref_private(mm, vma, &old_folio->page,
> - vmf->address);
> + unmap_ref_private(mm, vma, folio_page(old_folio, 0),
> + vmf->address);
unmap_ref_private() only has one caller (this one), so make that take a
folio. This is a whole series, all by itself.
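The shape of that conversion (untested; the function body needs the same
treatment):

static void unmap_ref_private(struct mm_struct *mm,
		struct vm_area_struct *vma, struct folio *folio,
		unsigned long address)

with this caller becoming:

	unmap_ref_private(mm, vma, old_folio, vmf->address);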
> hugetlb_cgroup_migrate(old_folio, new_folio);
> - set_page_owner_migrate_reason(&new_folio->page, reason);
> + set_page_owner_migrate_reason(folio_page(new_folio, 0), reason);
>
See earlier about whether page owner should be folio- or page-based.
> int ret;
> - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> unsigned long vmemmap_reuse;
Probably right.
> int ret = 0;
> - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> unsigned long vmemmap_reuse;
Ditto.
> - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> unsigned long vmemmap_reuse;
Ditto.
> */
> - spfn = (unsigned long)&folio->page;
> + spfn = (unsigned long)folio_page(folio, 0);
Ditto.
> register_page_bootmem_memmap(pfn_to_section_nr(spfn),
> - &folio->page,
> - HUGETLB_VMEMMAP_RESERVE_SIZE);
> + folio_page(folio, 0),
> + HUGETLB_VMEMMAP_RESERVE_SIZE);
Don't change the indentation, but otherwise this looks right.
> result = SCAN_SUCCEED;
> - trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> - referenced, writable, result);
> + trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> + none_or_zero, referenced,
> + writable, result);
> return result;
trace_mm_collapse_huge_page_isolate() should take a folio.
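That means changing the tracepoint prototype, roughly (untested):

	TP_PROTO(struct folio *folio, int none_or_zero,
		 int referenced, bool writable, int status),

with TP_fast_assign() fetching the pfn from the folio instead.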
> release_pte_pages(pte, _pte, compound_pagelist);
> - trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> - referenced, writable, result);
> + trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> + none_or_zero, referenced,
> + writable, result);
> return result;
ditto.
> out:
> - trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> - none_or_zero, result, unmapped);
> + trace_mm_khugepaged_scan_pmd(mm, folio_page(folio, 0), writable,
> + referenced, none_or_zero, result,
> + unmapped);
> return result;
ditto.
> result = install_pmd
> - ? set_huge_pmd(vma, haddr, pmd, &folio->page)
> + ? set_huge_pmd(vma, haddr, pmd, folio_page(folio, 0))
> : SCAN_SUCCEED;
I feel that set_huge_pmd() should take a folio.
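ie (untested):

static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t *pmdp, struct folio *folio)

so that this call site becomes:

	result = install_pmd
		? set_huge_pmd(vma, haddr, pmd, folio)
		: SCAN_SUCCEED;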