Message-Id: <20190301125144.1f4ce76a8e7bcd2688181f48@linux-foundation.org>
Date:   Fri, 1 Mar 2019 12:51:44 -0800
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Qian Cai <cai@....pw>
Cc:     mhocko@...nel.org, benh@...nel.crashing.org, paulus@...ba.org,
        mpe@...erman.id.au, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Arun KS <arunks@...eaurora.org>
Subject: Re: [PATCH v2] mm/hotplug: fix an imbalance with DEBUG_PAGEALLOC

On Fri,  1 Mar 2019 15:19:50 -0500 Qian Cai <cai@....pw> wrote:

> When onlining a memory block with DEBUG_PAGEALLOC, the kernel unmaps the
> pages in the block from the kernel linear mapping as they are freed.
> However, the offline path never maps those pages back in the first place.
> As a result, onlining triggers the panic below on ppc64le, because that
> arch checks whether a page is mapped before unmapping it. The imbalance
> exists on all arches, though, wherever a double unmapping could happen.
> Therefore, let the kernel map those pages in generic_online_page() before
> they are freed into the page allocator for the first time, where the page
> count is set to one.
> 
> On the other hand, it works fine during boot because, at least on IBM
> POWER8, the call chain is,
> 
> early_setup
>   early_init_mmu
>     hash__early_init_mmu
>       htab_initialize [1]
>         htab_bolt_mapping [2]
> 
> which effectively maps all memblock regions, just like
> kernel_map_linear_page() does, so the later mem_init() ->
> memblock_free_all() unmaps them without any imbalance. Other arches,
> which do not have this imbalance check, still unmap each page at most
> once.
> 
> [1]
> for_each_memblock(memory, reg) {
>         base = (unsigned long)__va(reg->base);
>         size = reg->size;
> 
>         DBG("creating mapping for region: %lx..%lx (prot: %lx)\n",
>                 base, size, prot);
> 
>         BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
>                 prot, mmu_linear_psize, mmu_kernel_ssize));
> }
> 
> [2] linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
> 
> kernel BUG at arch/powerpc/mm/hash_utils_64.c:1815!
>
> ...
>
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -660,6 +660,7 @@ static void generic_online_page(struct page *page)
>  {
>  	__online_page_set_limits(page);
>  	__online_page_increment_counters(page);
> +	kernel_map_pages(page, 1, 1);
>  	__online_page_free(page);
>  }
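
The argument above can be made concrete with a small, self-contained
sketch. This is for illustration only: the names in it (page_is_mapped[],
debug_pagealloc_unmap(), free_page_into_allocator(), hotplug_online_page())
are hypothetical stand-ins for the kernel's linear_map_hash_slots[]
bookkeeping and the DEBUG_PAGEALLOC unmap-on-free behaviour, not kernel
interfaces. It builds as ordinary userspace C, and the final, unbalanced
call aborts, mimicking the BUG_ON() reported on ppc64le.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 4

/* Models the 'mapped' bit kept in linear_map_hash_slots[] (the | 0x80 in [2]). */
static bool page_is_mapped[NR_PAGES];

static void map_page(int pfn)
{
        page_is_mapped[pfn] = true;
}

/* Models kernel_unmap_linear_page(): unmapping an unmapped page is a BUG. */
static void debug_pagealloc_unmap(int pfn)
{
        assert(page_is_mapped[pfn] && "double unmap: page was never mapped");
        page_is_mapped[pfn] = false;
}

/* With DEBUG_PAGEALLOC, freeing a page into the allocator unmaps it. */
static void free_page_into_allocator(int pfn)
{
        debug_pagealloc_unmap(pfn);
}

/* Boot: the region is bolted/mapped before memblock_free_all() frees it. */
static void boot_free_all(void)
{
        for (int pfn = 0; pfn < NR_PAGES; pfn++) {
                map_page(pfn);                 /* [1] + [2] above            */
                free_page_into_allocator(pfn); /* map, then unmap: balanced  */
        }
}

/* Hotplug: generic_online_page() frees pages that were never mapped. */
static void hotplug_online_page(int pfn, bool with_fix)
{
        if (with_fix)
                map_page(pfn);                 /* the added kernel_map_pages(page, 1, 1) */
        free_page_into_allocator(pfn);         /* without the fix, the assert fires here */
}

int main(void)
{
        boot_free_all();                /* fine on every config           */
        hotplug_online_page(0, true);   /* fine with the proposed patch   */
        hotplug_online_page(1, false);  /* aborts, modelling the panic    */
        printf("unreachable: the unbalanced unmap above aborts first\n");
        return 0;
}

With the kernel_map_pages(page, 1, 1) call added in generic_online_page(),
every unmap-on-free is preceded by a map, which is exactly the balance the
boot-time htab_bolt_mapping() path already provides.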

This code was changed a lot by Arun's patch "mm/page_alloc.c: memory
hotplug: free pages as higher order".

I don't think hotplug+DEBUG_PAGEALLOC is important enough to disrupt
memory_hotplug-free-pages-as-higher-order.patch, which took a long time
to sort out.  So could you please take a look at linux-next, determine
whether the problem is still there and, if so, propose a suitable patch?

Thanks.
