Message-ID: <8bd06a6e-8d61-47aa-bb37-1916b18597da@lucifer.local>
Date: Tue, 12 Aug 2025 20:38:50 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
xen-devel@...ts.xenproject.org, linux-fsdevel@...r.kernel.org,
nvdimm@...ts.linux.dev, linuxppc-dev@...ts.ozlabs.org,
Andrew Morton <akpm@...ux-foundation.org>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Juergen Gross <jgross@...e.com>,
Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Dan Williams <dan.j.williams@...el.com>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
Zi Yan <ziy@...dia.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
Jann Horn <jannh@...gle.com>, Pedro Falcato <pfalcato@...e.de>,
Hugh Dickins <hughd@...gle.com>, Oscar Salvador <osalvador@...e.de>,
Lance Yang <lance.yang@...ux.dev>,
Wei Yang <richard.weiyang@...il.com>
Subject: Re: [PATCH v3 10/11] mm: introduce and use vm_normal_page_pud()
On Mon, Aug 11, 2025 at 01:26:30PM +0200, David Hildenbrand wrote:
> Let's introduce vm_normal_page_pud(), which ends up being fairly simple
> because of our new common helpers and there not being a PUD-sized zero
> folio.
>
> Use vm_normal_page_pud() in folio_walk_start() to resolve a TODO,
> structuring the code like the other (pmd/pte) cases. Defer
> introducing vm_normal_folio_pud() until really used.
>
> Note that we can so far get PUDs with hugetlb, daxfs and PFNMAP entries.
I guess hugetlb will be handled separately, daxfs will be... special, I
think? And PFNMAP definitely is.
>
> Reviewed-by: Wei Yang <richard.weiyang@...il.com>
> Reviewed-by: Oscar Salvador <osalvador@...e.de>
> Signed-off-by: David Hildenbrand <david@...hat.com>
Anyway, this is nice, thanks! Good to resolve the TODO :)
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
> include/linux/mm.h | 2 ++
> mm/memory.c | 19 +++++++++++++++++++
> mm/pagewalk.c | 20 ++++++++++----------
> 3 files changed, 31 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b626d1bacef52..8ca7d2fa71343 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2360,6 +2360,8 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
> unsigned long addr, pmd_t pmd);
> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> pmd_t pmd);
> +struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
> + pud_t pud);
>
> void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> unsigned long size);
> diff --git a/mm/memory.c b/mm/memory.c
> index 78af3f243cee7..6f806bf3cc994 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -809,6 +809,25 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
> return page_folio(page);
> return NULL;
> }
> +
> +/**
> + * vm_normal_page_pud() - Get the "struct page" associated with a PUD
> + * @vma: The VMA mapping the @pud.
> + * @addr: The address where the @pud is mapped.
> + * @pud: The PUD.
> + *
> + * Get the "struct page" associated with a PUD. See __vm_normal_page()
> + * for details on "normal" and "special" mappings.
> + *
> + * Return: Returns the "struct page" if this is a "normal" mapping. Returns
> + * NULL if this is a "special" mapping.
> + */
> +struct page *vm_normal_page_pud(struct vm_area_struct *vma,
> + unsigned long addr, pud_t pud)
> +{
> + return __vm_normal_page(vma, addr, pud_pfn(pud), pud_special(pud),
> + pud_val(pud), PGTABLE_LEVEL_PUD);
> +}
> #endif
>
> /**
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 648038247a8d2..c6753d370ff4e 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -902,23 +902,23 @@ struct folio *folio_walk_start(struct folio_walk *fw,
> fw->pudp = pudp;
> fw->pud = pud;
>
> - /*
> - * TODO: FW_MIGRATION support for PUD migration entries
> - * once there are relevant users.
> - */
> - if (!pud_present(pud) || pud_special(pud)) {
> + if (pud_none(pud)) {
> spin_unlock(ptl);
> goto not_found;
> - } else if (!pud_leaf(pud)) {
> + } else if (pud_present(pud) && !pud_leaf(pud)) {
> spin_unlock(ptl);
> goto pmd_table;
> + } else if (pud_present(pud)) {
> + page = vm_normal_page_pud(vma, addr, pud);
> + if (page)
> + goto found;
> }
> /*
> - * TODO: vm_normal_page_pud() will be handy once we want to
> - * support PUD mappings in VM_PFNMAP|VM_MIXEDMAP VMAs.
> + * TODO: FW_MIGRATION support for PUD migration entries
> + * once there are relevant users.
> */
> - page = pud_page(pud);
> - goto found;
> + spin_unlock(ptl);
> + goto not_found;
> }
>
> pmd_table:
> --
> 2.50.1
>