Message-ID: <89b5e714-c352-81c2-429b-cd057a18a5a0@redhat.com>
Date: Thu, 4 Feb 2021 17:06:13 +0100
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>, akpm@...ux-foundation.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/3] x86/vmemmap: Drop handling of 1GB vmemmap ranges
On 04.02.21 14:43, Oscar Salvador wrote:
> We never get to allocate 1GB pages when mapping the vmemmap range.
> Drop the dead code for both the aligned and unaligned cases and leave
> only the direct map handling.
>
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> Suggested-by: David Hildenbrand <david@...hat.com>
> ---
> arch/x86/mm/init_64.c | 35 +++++++----------------------------
> 1 file changed, 7 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index b0e1d215c83e..9ecb3c488ac8 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1062,7 +1062,6 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
>  	unsigned long next, pages = 0;
>  	pmd_t *pmd_base;
>  	pud_t *pud;
> -	void *page_addr;
>
>  	pud = pud_start + pud_index(addr);
>  	for (; addr < end; addr = next, pud++) {
> @@ -1071,33 +1070,13 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
>  		if (!pud_present(*pud))
>  			continue;
>
> -		if (pud_large(*pud)) {
> -			if (IS_ALIGNED(addr, PUD_SIZE) &&
> -			    IS_ALIGNED(next, PUD_SIZE)) {
> -				if (!direct)
> -					free_pagetable(pud_page(*pud),
> -						       get_order(PUD_SIZE));
> -
> -				spin_lock(&init_mm.page_table_lock);
> -				pud_clear(pud);
> -				spin_unlock(&init_mm.page_table_lock);
> -				pages++;
> -			} else {
> -				/* If here, we are freeing vmemmap pages. */
> -				memset((void *)addr, PAGE_INUSE, next - addr);
> -
> -				page_addr = page_address(pud_page(*pud));
> -				if (!memchr_inv(page_addr, PAGE_INUSE,
> -						PUD_SIZE)) {
> -					free_pagetable(pud_page(*pud),
> -						       get_order(PUD_SIZE));
> -
> -					spin_lock(&init_mm.page_table_lock);
> -					pud_clear(pud);
> -					spin_unlock(&init_mm.page_table_lock);
> -				}
> -			}
> -
> +		if (pud_large(*pud) &&
> +		    IS_ALIGNED(addr, PUD_SIZE) &&
> +		    IS_ALIGNED(next, PUD_SIZE)) {
> +			spin_lock(&init_mm.page_table_lock);
> +			pud_clear(pud);
> +			spin_unlock(&init_mm.page_table_lock);
> +			pages++;
>  			continue;
>  		}
>
>
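For anyone skimming the diff, here is a minimal standalone sketch of the
alignment check that the surviving branch relies on. It is not part of the
patch; PUD_SIZE and IS_ALIGNED below are local stand-ins modelled on the
kernel's definitions (1 GiB PUD entries on x86-64):

/*
 * Userspace illustration only: which [addr, next) ranges would take the
 * large-PUD teardown path kept by this patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define PUD_SHIFT	30
#define PUD_SIZE	(1UL << PUD_SHIFT)	/* 1 GiB */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* The PUD entry is torn down only when the range covers it entirely. */
static bool covers_whole_pud(unsigned long addr, unsigned long next)
{
	return IS_ALIGNED(addr, PUD_SIZE) && IS_ALIGNED(next, PUD_SIZE);
}

int main(void)
{
	/* [1 GiB, 2 GiB): fully covers the entry -> just clear the PUD. */
	printf("%d\n", covers_whole_pud(0x40000000UL, 0x80000000UL));	/* 1 */
	/* [1 GiB, 1.5 GiB): partial -> falls through to the PMD level. */
	printf("%d\n", covers_whole_pud(0x40000000UL, 0x60000000UL));	/* 0 */
	return 0;
}

A partial range never takes this path; per the changelog, the removed
PAGE_INUSE/memchr_inv fallback was the only code that pretended to handle
that case, and it was never reached for the vmemmap.
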
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb