Message-ID: <Zf8T1hztUyslmW51@x1n>
Date: Sat, 23 Mar 2024 13:39:34 -0400
From: Peter Xu <peterx@...hat.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Jason Gunthorpe <jgg@...dia.com>, Mike Rapoport <rppt@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
kernel test robot <lkp@...el.com>
Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback
On Sat, Mar 23, 2024 at 11:16:43AM -0400, peterx@...hat.com wrote:
> From: Peter Xu <peterx@...hat.com>
>
> The comment in the code explains the reasons.  We took a different
> approach compared to pmd_pfn() by providing a fallback function.
>
> Another option would be to provide lower-level config options (compare
> with HUGETLB_PAGE or THP) to identify which page table level an arch
> can support for such huge mappings.  However, that would be overkill.
>
> Cc: Mike Rapoport (IBM) <rppt@...nel.org>
> Cc: Matthew Wilcox <willy@...radead.org>
> Reported-by: kernel test robot <lkp@...el.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202403231529.HRev1zcD-lkp@intel.com/
Also:
Closes: https://lore.kernel.org/oe-kbuild-all/202403240112.kHKVSfCL-lkp@intel.com/
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>
> Andrew,
>
> If we care about per-commit build errors (and if reordering is ever
> feasible), we can move this patch before "mm/gup: handle huge pud for
> follow_pud_mask()" in mm-unstable to unbreak the build at that commit.
>
> Thanks,
> ---
>  arch/riscv/include/asm/pgtable.h    |  1 +
>  arch/s390/include/asm/pgtable.h     |  1 +
>  arch/sparc/include/asm/pgtable_64.h |  1 +
>  arch/x86/include/asm/pgtable.h      |  1 +
>  include/linux/pgtable.h             | 10 ++++++++++
>  5 files changed, 14 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 20242402fc11..0ca28cc8e3fa 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>
> #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
>         return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 1a71cb19c089..6cbbe473f680 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
>         return (unsigned long)__va(pud_val(pud) & origin_mask);
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
>         return __pa(pud_deref(pud)) >> PAGE_SHIFT;
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4d1bafaba942..26efc9bb644a 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
>         return pte_val(pte) & _PAGE_PMD_HUGE;
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
>         pte_t pte = __pte(pud_val(pud));
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index cefc7a84f7a4..273f7557218c 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>         return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
> }
>
> +#define pud_pfn pud_pfn
> static inline unsigned long pud_pfn(pud_t pud)
> {
>         phys_addr_t pfn = pud_val(pud);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2a1c044ae467..deae9e50f1a8 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
> #define pte_leaf_size(x) PAGE_SIZE
> #endif
>
> +/*
> + * We always define pmd_pfn for all archs as it's used in lots of generic
> + * code. Now it happens too for pud_pfn (and can happen for larger
> + * mappings too in the future; we're not there yet). Instead of defining
> + * it for all archs (like pmd_pfn), provide a fallback.
> + */
> +#ifndef pud_pfn
> +#define pud_pfn(x) ({ BUILD_BUG(); 0; })
> +#endif
> +
> /*
>  * Some architectures have MMUs that are configurable or selectable at boot
>  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> --
> 2.44.0
>
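As a side note for readers unfamiliar with the idiom: `#define pud_pfn
pud_pfn' makes the symbol visible to the preprocessor, so the generic
header's #ifndef installs the fallback only on architectures that don't
provide their own definition.  A minimal standalone sketch of the
pattern follows; it is not kernel code, all names in it are
illustrative, and a GCC-style error attribute stands in for the
kernel's BUILD_BUG() machinery:

/* Standalone sketch of the "#define pud_pfn pud_pfn" override pattern. */
#include <stdio.h>

/* "Arch" side: a macro expanding to its own name is detectable with
 * #ifndef, while calls still resolve to the inline function below. */
#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(unsigned long pudval)
{
        return pudval >> 12;    /* stand-in for the real arch pfn math */
}

/* "Generic" side: installed only when no arch definition exists.  The
 * kernel's body is ({ BUILD_BUG(); 0; }); the error attribute below
 * mimics it: the header always compiles, but any call that survives
 * dead-code elimination fails the build. */
#ifndef pud_pfn
extern unsigned long __pud_pfn_unsupported(void)
        __attribute__((error("pud_pfn() unsupported here")));
#define pud_pfn(x) __pud_pfn_unsupported()
#endif

int main(void)
{
        printf("%lu\n", pud_pfn(0x1000UL));     /* arch version wins: 1 */
        return 0;
}

It is the same #ifndef convention already visible a few lines above in
the quoted hunk for pte_leaf_size().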
--
Peter Xu