Date:	Fri, 22 May 2015 13:14:44 -0400
From:	Waiman Long <waiman.long@...com>
To:	Mel Gorman <mgorman@...e.de>
CC:	Daniel J Blueman <daniel@...ascale.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	nzimmer <nzimmer@....com>, Dave Hansen <dave.hansen@...el.com>,
	Scott Norton <scott.norton@...com>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Steffen Persvold <sp@...ascale.com>
Subject: Re: [PATCH] mm: meminit: Finish initialisation of struct pages before
 basic setup

On 05/22/2015 05:33 AM, Mel Gorman wrote:
> On Fri, May 22, 2015 at 02:30:01PM +0800, Daniel J Blueman wrote:
>> On Thu, May 14, 2015 at 6:03 PM, Daniel J Blueman
>> <daniel@...ascale.com> wrote:
>>> On Thu, May 14, 2015 at 12:31 AM, Mel Gorman <mgorman@...e.de> wrote:
>>>> On Wed, May 13, 2015 at 10:53:33AM -0500, nzimmer wrote:
>>>>> I just noticed a hang on my largest box.
>>>>> I can only reproduce it with large core counts; if I turn down the
>>>>> number of CPUs it doesn't have an issue.
>>>>>
>>>> Odd. The core count should make little difference as only one CPU
>>>> per node should be in use. Does sysrq+t give any indication how or
>>>> where it is hanging?
>>> I was seeing the same behaviour, with 1000ms increasing to 5500ms
>>> [1]; this suggests either lock contention or O(n) behaviour.
>>>
>>> Nathan, can you check with this ordering of patches from Andrew's
>>> cache [2]? I was getting hangs until I found them all.
>>>
>>> I'll follow up with timing data.
>> 7 TB over 216 NUMA nodes, 1728 cores; times below are from kernel
>> 4.0.4 load to login:
>>
>> 1. 2086s with patches 01-19 [1]
>>
>> 2. 2026s adding "Take into account that large system caches scale
>> linearly with memory", which has:
>> min(2UL << (30 - PAGE_SHIFT), (pgdat->node_spanned_pages >> 3));
>>
>> 3. 2442s fixing to:
>> max(2UL << (30 - PAGE_SHIFT), (pgdat->node_spanned_pages >> 3));
>>
>> 4. 2064s adjusting minimum and shift to:
>> max(512UL << (20 - PAGE_SHIFT), (pgdat->node_spanned_pages >> 8));
>>
>> 5. 1934s adjusting minimum and shift to:
>> max(128UL << (20 - PAGE_SHIFT), (pgdat->node_spanned_pages >> 8));
>>
>> 6. 930s #5 with the non-temporal PMD init patch I had earlier
>> proposed (I'll pursue separately)
>>
>> The scaling patch isn't in -mm.
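
(For concreteness, the limits quoted above can be compared with a quick
standalone calculation. PAGE_SHIFT and the per-node size below are
assumptions, 4 KiB pages and roughly 7 TB / 216 nodes, about 32 GiB per
node, not values read from the machine above.)

/*
 * Illustration only: how many struct pages each quoted limit would
 * initialise up front on one NUMA node.  The node size and PAGE_SHIFT
 * are assumed, not measured; the real kernel reads them at boot.
 */
#include <stdio.h>

#define PAGE_SHIFT 12UL

static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

int main(void)
{
	unsigned long node_spanned_pages = 32UL << (30 - PAGE_SHIFT); /* ~32 GiB node */

	/* Variant 2 above: capped at 2 GiB worth of pages. */
	unsigned long v_min = min_ul(2UL << (30 - PAGE_SHIFT),
				     node_spanned_pages >> 3);
	/* Variant 3 above: at least 2 GiB worth, or 1/8 of the node span. */
	unsigned long v_max = max_ul(2UL << (30 - PAGE_SHIFT),
				     node_spanned_pages >> 3);
	/* Variant 5 above: at least 128 MiB worth, or 1/256 of the node span. */
	unsigned long v_small = max_ul(128UL << (20 - PAGE_SHIFT),
				       node_spanned_pages >> 8);

	printf("min(2G, span/8)    : %lu pages (%lu MiB)\n",
	       v_min, v_min >> (20 - PAGE_SHIFT));
	printf("max(2G, span/8)    : %lu pages (%lu MiB)\n",
	       v_max, v_max >> (20 - PAGE_SHIFT));
	printf("max(128M, span/256): %lu pages (%lu MiB)\n",
	       v_small, v_small >> (20 - PAGE_SHIFT));
	return 0;
}

On a node that size, min() caps the up-front initialisation at 2 GiB
worth of pages, while max() lets it grow with the node span (4 GiB
here), which is why the two variants behave so differently on large
machines.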
> That patch was superseded by "mm: meminit: finish
> initialisation of struct pages before basic setup" and
> "mm-meminit-finish-initialisation-of-struct-pages-before-basic-setup-fix"
> so that's ok.
>
> FWIW, I think you should still go ahead with the non-temporal patches because
> there is potential benefit there other than the initialisation.  If there
> was an arch-optional implementation of a non-temporal clear then it would
> also be worth considering if __GFP_ZERO should use non-temporal stores.
> At a greater stretch it would be worth considering if kswapd freeing should
> zero pages to avoid a zero on the allocation side in the general case as
> it would be more generally useful and a stepping stone towards what the
> series "Sanitizing freed pages" attempts.

I think the non-temporal patch mainly benefits AMD systems. I have tried
the patch on DragonHawk and it actually made boot-up a little bit
slower. I think the Intel-optimized "rep stosb" instruction (used in
memset) is performing well. I had done a similar test on the zero-page
code and the performance gain was inconclusive.
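
(As a rough illustration of the two clearing strategies being compared,
a minimal userspace sketch is below. It assumes x86-64 with SSE2; the
buffer size is arbitrary, and glibc's memset is only assumed to pick
the optimised "rep stosb" path on recent Intel parts. It is not the
kernel patch itself.)

/*
 * Userspace sketch: cached clear via memset vs. a non-temporal
 * (streaming) clear that bypasses the cache.
 */
#include <emmintrin.h>   /* _mm_stream_si128, _mm_setzero_si128, _mm_sfence */
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE (64UL << 20)   /* 64 MiB, larger than many last-level caches */

/* Clear with non-temporal stores so the zeroed lines skip the cache. */
static void clear_nontemporal(void *buf, size_t len)
{
	__m128i zero = _mm_setzero_si128();
	__m128i *p = buf;
	size_t i;

	for (i = 0; i < len / sizeof(*p); i++)
		_mm_stream_si128(&p[i], zero);
	_mm_sfence();            /* make the streaming stores globally visible */
}

int main(void)
{
	void *buf = aligned_alloc(16, BUF_SIZE);

	if (!buf)
		return 1;
	memset(buf, 0, BUF_SIZE);          /* cached clear ("rep stosb" path on Intel) */
	clear_nontemporal(buf, BUF_SIZE);  /* streaming clear, a win on some CPUs */
	free(buf);
	return 0;
}

Non-temporal stores help when the cleared memory will not be touched
again soon (as with boot-time struct page init), but can hurt when the
data is consumed immediately, which may explain why the patch slows
boot on some machines while helping others.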

Cheers,
Longman

