Message-ID: <20180216092959.gkm6d4j2zplk724r@gmail.com>
Date:   Fri, 16 Feb 2018 10:30:00 +0100
From:   Ingo Molnar <mingo@...nel.org>
To:     Pavel Tatashin <pasha.tatashin@...cle.com>
Cc:     steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
        akpm@...ux-foundation.org, mgorman@...hsingularity.net,
        mhocko@...e.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        gregkh@...uxfoundation.org, vbabka@...e.cz,
        bharata@...ux.vnet.ibm.com, tglx@...utronix.de, mingo@...hat.com,
        hpa@...or.com, x86@...nel.org, dan.j.williams@...el.com,
        kirill.shutemov@...ux.intel.com, bhe@...hat.com
Subject: Re: [v4 6/6] mm/memory_hotplug: optimize memory hotplug


* Pavel Tatashin <pasha.tatashin@...cle.com> wrote:

> During memory hotplugging we traverse struct pages three times:
> 
> 1. memset(0) in sparse_add_one_section()
> 2. loop in __add_section() to set do: set_page_node(page, nid); and
>    SetPageReserved(page);
> 3. loop in memmap_init_zone() to call __init_single_pfn()
> 
> This patch remove the first two loops, and leaves only loop 3. All struct
> pages are initialized in one place, the same as it is done during boot.

s/remove
 /removes

> The benefits:
> - We improve the memory hotplug performance because we are not evicting
>   cache several times and also reduce loop branching overheads.

s/We improve the memory hotplug performance
 /We improve memory hotplug performance

s/not evicting cache several times
 /not evicting the cache several times

s/overheads
 /overhead

> - Remove condition from hotpath in __init_single_pfn(), that was added in
>   order to fix the problem that was reported by Bharata in the above email
>   thread, thus also improve the performance during normal boot.

s/improve the performance
 /improve performance

> - Make memory hotplug more similar to boot memory initialization path
>   because we zero and initialize struct pages only in one function.

s/more similar to boot memory initialization path
 /more similar to the boot memory initialization path

> - Simplifies memory hotplug strut page initialization code, and thus
>   enables future improvements, such as multi-threading the initialization
>   of struct pages in order to improve the hotplug performance even further
>   on larger machines.

s/strut
 /struct

s/to improve the hotplug performance even further
 /to improve hotplug performance even further

> @@ -260,21 +260,12 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
>  		return ret;
>  
>  	/*
> -	 * Make all the pages reserved so that nobody will stumble over half
> -	 * initialized state.
> -	 * FIXME: We also have to associate it with a node because page_to_nid
> -	 * relies on having page with the proper node.
> +	 * The first page in every section holds node id, this is because we
> +	 * will need it in online_pages().

s/holds node id
 /holds the node id

> +#ifdef CONFIG_DEBUG_VM
> +	/*
> +	 * poison uninitialized struct pages in order to catch invalid flags
> +	 * combinations.

Please capitalize sentences properly.

> +	 */
> +	memset(memmap, PAGE_POISON_PATTERN,
> +	       sizeof(struct page) * PAGES_PER_SECTION);
> +#endif

I'd suggest writing this into a single line:

	memset(memmap, PAGE_POISON_PATTERN, sizeof(struct page)*PAGES_PER_SECTION);

(And ignore any checkpatch whinging - the line break didn't make it more 
readable.)

With those details fixed, and assuming that this patch was tested:

  Reviewed-by: Ingo Molnar <mingo@...nel.org>

Thanks,

	Ingo
