Message-ID: <6355eb93-db4e-9e4f-557d-e10a053721a1@redhat.com>
Date: Wed, 3 Feb 2021 14:29:28 +0100
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] x86/vmemmap: Drop handling of 4K unaligned vmemmap range
On 03.02.21 11:47, Oscar Salvador wrote:
> remove_pte_table() is prepared to handle the case where either the
> start or the end of the range is not page-aligned.
> This cannot actually happen:
>
> __populate_section_memmap() enforces the range to be PMD aligned,
> so as long as the size of struct page remains a multiple of 8,
> the vmemmap range will be aligned to PAGE_SIZE.
>
> Drop the dead code and place a VM_BUG_ON in vmemmap_{populate,free}
> to catch nasty cases.
>
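
FWIW, the "multiple of 8" part is easy to see with some quick arithmetic:
a PMD-sized 2 MiB chunk covers 512 base pages, and 512 * 8 == 4096 ==
PAGE_SIZE, so the memmap for each PMD-aligned chunk ends exactly on a page
boundary. A small userspace sketch (illustrative only, with hard-coded
x86-64 values and sizeof(struct page) == 64 as in common configs, not
kernel code) makes that explicit:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Hard-coded x86-64 values, illustrative only. */
	const unsigned long page_size = 4096;     /* PAGE_SIZE */
	const unsigned long pages_per_pmd = 512;  /* 2 MiB / 4 KiB */
	const unsigned long page_struct = 64;     /* typical sizeof(struct page) */

	/* vmemmap bytes backing one PMD-aligned 2 MiB chunk of memory. */
	unsigned long vmemmap_bytes = pages_per_pmd * page_struct;

	/* 512 * 64 == 32768 == 8 full pages; page-aligned because 64 % 8 == 0. */
	printf("vmemmap per 2 MiB chunk: %lu bytes\n", vmemmap_bytes);
	assert(vmemmap_bytes % page_size == 0);
	return 0;
}
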
> Signed-off-by: Oscar Salvador <osalvador@...e.de>
> Suggested-by: David Hildenbrand <david@...hat.com>
> ---
> arch/x86/mm/init_64.c | 48 ++++++++++++-------------------------------
> 1 file changed, 13 insertions(+), 35 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index b5a3fa4033d3..b0e1d215c83e 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -962,7 +962,6 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
> {
> unsigned long next, pages = 0;
> pte_t *pte;
> - void *page_addr;
> phys_addr_t phys_addr;
>
> pte = pte_start + pte_index(addr);
> @@ -983,42 +982,15 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
> if (phys_addr < (phys_addr_t)0x40000000)
> return;
>
> - if (PAGE_ALIGNED(addr) && PAGE_ALIGNED(next)) {
> - /*
> - * Do not free direct mapping pages since they were
> - * freed when offlining, or simplely not in use.
> - */
> - if (!direct)
> - free_pagetable(pte_page(*pte), 0);
> -
> - spin_lock(&init_mm.page_table_lock);
> - pte_clear(&init_mm, addr, pte);
> - spin_unlock(&init_mm.page_table_lock);
> + if (!direct)
> + free_pagetable(pte_page(*pte), 0);
>
> - /* For non-direct mapping, pages means nothing. */
> - pages++;
> - } else {
> - /*
> - * If we are here, we are freeing vmemmap pages since
> - * direct mapped memory ranges to be freed are aligned.
> - *
> - * If we are not removing the whole page, it means
> - * other page structs in this page are being used and
> - * we canot remove them. So fill the unused page_structs
> - * with 0xFD, and remove the page when it is wholly
> - * filled with 0xFD.
> - */
> - memset((void *)addr, PAGE_INUSE, next - addr);
> -
> - page_addr = page_address(pte_page(*pte));
> - if (!memchr_inv(page_addr, PAGE_INUSE, PAGE_SIZE)) {
> - free_pagetable(pte_page(*pte), 0);
> + spin_lock(&init_mm.page_table_lock);
> + pte_clear(&init_mm, addr, pte);
> + spin_unlock(&init_mm.page_table_lock);
>
> - spin_lock(&init_mm.page_table_lock);
> - pte_clear(&init_mm, addr, pte);
> - spin_unlock(&init_mm.page_table_lock);
> - }
> - }
> + /* For non-direct mapping, pages means nothing. */
> + pages++;
> }
>
> /* Call free_pte_table() in remove_pmd_table(). */
> @@ -1197,6 +1169,9 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
> void __ref vmemmap_free(unsigned long start, unsigned long end,
> struct vmem_altmap *altmap)
> {
> + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
> remove_pagetable(start, end, false, altmap);
> }
>
> @@ -1556,6 +1531,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> {
> int err;
>
> + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
> if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> err = vmemmap_populate_basepages(start, end, node, NULL);
> else if (boot_cpu_has(X86_FEATURE_PSE))
>
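Note that the VM_BUG_ON()s are free unless CONFIG_DEBUG_VM is enabled, so
production builds don't pay for the new checks. The IS_ALIGNED() test
itself is just a power-of-two mask; a minimal userspace rendition (a
simplified take, not the exact kernel macro, which also casts the
alignment through typeof) would be:

#include <assert.h>

/* Simplified userspace take on the kernel's IS_ALIGNED();
 * 'a' must be a power of two. */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

#define PAGE_SIZE	4096UL

int main(void)
{
	/* Hypothetical vmemmap start address, for illustration. */
	unsigned long start = 0xffffea0000000000UL;

	assert(IS_ALIGNED(start, PAGE_SIZE));		/* aligned: check passes */
	assert(!IS_ALIGNED(start + 8UL, PAGE_SIZE));	/* off by 8 bytes: caught */
	return 0;
}
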
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb