Message-ID: <CAMj1kXGe6jx=dZ3Xe8Cz-xD0pHUaDCyKB4Shb4B=U5vAWXcdRw@mail.gmail.com>
Date: Fri, 16 May 2025 10:51:55 +0100
From: Ard Biesheuvel <ardb@...nel.org>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Jonathan Corbet <corbet@....net>, Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, Jan Kiszka <jan.kiszka@...mens.com>,
Kieran Bingham <kbingham@...nel.org>, Michael Roth <michael.roth@....com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>, Brijesh Singh <brijesh.singh@....com>,
Sandipan Das <sandipan.das@....com>, Juergen Gross <jgross@...e.com>,
Tom Lendacky <thomas.lendacky@....com>, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-efi@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv2 2/3] x86/64/mm: Make SPARSEMEM_VMEMMAP the only memory model
On Fri, 16 May 2025 at 10:15, Kirill A. Shutemov
<kirill.shutemov@...ux.intel.com> wrote:
>
> 5-level paging only supports SPARSEMEM_VMEMMAP. CONFIG_X86_5LEVEL is
> being phased out, making 5-level paging support mandatory.
>
> Make CONFIG_SPARSEMEM_VMEMMAP mandatory for x86-64 and eliminate
> any associated conditional statements.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Reviewed-by: Ard Biesheuvel <ardb@...nel.org>
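
(For clarity, and assuming I am reading the hunks right, the resulting
Kconfig entry should end up looking roughly like this once the select
is moved out of X86_5LEVEL:

    config ARCH_SPARSEMEM_ENABLE
            def_bool y
            select SPARSEMEM_STATIC if X86_32
            select SPARSEMEM_VMEMMAP_ENABLE if X86_64
            select SPARSEMEM_VMEMMAP if X86_64

i.e. SPARSEMEM_VMEMMAP is enabled unconditionally on 64-bit, regardless
of whether 5-level paging is configured.)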
> ---
> arch/x86/Kconfig | 2 +-
> arch/x86/mm/init_64.c | 9 +--------
> 2 files changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index d3c2da3b2f0b..45b36a019b5e 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1467,7 +1467,6 @@ config X86_PAE
> config X86_5LEVEL
> bool "Enable 5-level page tables support"
> default y
> - select SPARSEMEM_VMEMMAP
> depends on X86_64
> help
> 5-level paging enables access to larger address space:
> @@ -1579,6 +1578,7 @@ config ARCH_SPARSEMEM_ENABLE
> def_bool y
> select SPARSEMEM_STATIC if X86_32
> select SPARSEMEM_VMEMMAP_ENABLE if X86_64
> + select SPARSEMEM_VMEMMAP if X86_64
>
> config ARCH_SPARSEMEM_DEFAULT
> def_bool X86_64 || (NUMA && X86_32)
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index bf45c7aed336..66330fe4e18c 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -833,7 +833,6 @@ void __init paging_init(void)
> zone_sizes_init();
> }
>
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
> #define PAGE_UNUSED 0xFD
>
> /*
> @@ -932,7 +931,6 @@ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long
> if (!IS_ALIGNED(end, PMD_SIZE))
> unused_pmd_start = end;
> }
> -#endif
>
> /*
> * Memory hotplug specific functions
> @@ -1152,16 +1150,13 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
> pmd_clear(pmd);
> spin_unlock(&init_mm.page_table_lock);
> pages++;
> - }
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
> - else if (vmemmap_pmd_is_unused(addr, next)) {
> + } else if (vmemmap_pmd_is_unused(addr, next)) {
> free_hugepage_table(pmd_page(*pmd),
> altmap);
> spin_lock(&init_mm.page_table_lock);
> pmd_clear(pmd);
> spin_unlock(&init_mm.page_table_lock);
> }
> -#endif
> continue;
> }
>
> @@ -1500,7 +1495,6 @@ unsigned long memory_block_size_bytes(void)
> return memory_block_size_probed;
> }
>
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
> /*
> * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
> */
> @@ -1647,4 +1641,3 @@ void __meminit vmemmap_populate_print_last(void)
> node_start = 0;
> }
> }
> -#endif
> --
> 2.47.2
>