Date:   Mon, 26 Sep 2016 21:17:43 +0530
From:   "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To:     Reza Arbab <arbab@...ux.vnet.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Rob Herring <robh+dt@...nel.org>,
        Frank Rowand <frowand.list@...il.com>,
        Jonathan Corbet <corbet@....net>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Bharata B Rao <bharata@...ux.vnet.ibm.com>,
        Nathan Fontenot <nfont@...ux.vnet.ibm.com>,
        Stewart Smith <stewart@...ux.vnet.ibm.com>,
        Alistair Popple <apopple@....ibm.com>,
        Balbir Singh <bsingharora@...il.com>,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linuxppc-dev@...ts.ozlabs.org, devicetree@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH v3 4/5] powerpc/mm: restore top-down allocation when using movable_node

Reza Arbab <arbab@...ux.vnet.ibm.com> writes:

> At boot, the movable_node option sets bottom-up memblock allocation.
>
> This reduces the chance that, in the window before movable memory has
> been identified, an allocation for the kernel might come from a movable
> node. By going bottom-up, early allocations will most likely come from
> the same node as the kernel image, which is necessarily in a nonmovable
> node.
>
> Then, once any known hotplug memory has been marked, allocation can be
> reset back to top-down. On x86, this is done in numa_init(). This patch
> does the same on power, in numa initmem_init().
>
> Signed-off-by: Reza Arbab <arbab@...ux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/numa.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index d7ac419..fdf1e69 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -945,6 +945,9 @@ void __init initmem_init(void)
>  	max_low_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
>  	max_pfn = max_low_pfn;
>
> +	/* bottom-up allocation may have been set by movable_node */
> +	memblock_set_bottom_up(false);
> +

By then we have already done a few memblock allocations, right? IMHO, we
should do this early enough in prom.c, right after parse_early_param(),
with a comment there explaining that we don't really support hotplug
memblock yet, and that when we do, this should be moved to a place where
we can handle memblock allocation such that we avoid spreading early
allocations into a movable node.
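
Something along these lines is what I have in mind (only a rough sketch
against early_init_devtree() in arch/powerpc/kernel/prom.c; the exact
placement right after the parse_early_param() call there is my
assumption, not tested):

void __init early_init_devtree(void *params)
{
	/* ... existing flat device tree scanning ... */

	/*
	 * parse_early_param() is where movable_node is handled, and it
	 * flips memblock to bottom-up allocation.
	 */
	parse_early_param();

	/*
	 * We don't really support hotplug memblock yet, so go back to
	 * the default top-down allocation before any further memblock
	 * allocations happen.  Once hotplug memblock is supported, this
	 * should move to a point where movable regions have been marked,
	 * so that early allocations don't spread into a movable node.
	 */
	memblock_set_bottom_up(false);

	/* ... rest of early device tree setup ... */
}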


>  	if (parse_numa_properties())
>  		setup_nonnuma();
>  	else
> -- 
> 1.8.3.1

-aneesh
