Message-ID: <ae4e3597-f664-e5c4-97fb-e07f230d5017@intel.com>
Date:   Tue, 21 Mar 2017 07:54:37 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Michal Hocko <mhocko@...nel.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>
Cc:     Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Tim Chen <tim.c.chen@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ying Huang <ying.huang@...el.com>
Subject: Re: [PATCH v2 0/5] mm: support parallel free of memory

On 03/16/2017 02:07 AM, Michal Hocko wrote:
> On Wed 15-03-17 14:38:34, Tim Chen wrote:
>> max_active:   time
>> 1             8.9s   ±0.5%
>> 2             5.65s  ±5.5%
>> 4             4.84s  ±0.16%
>> 8             4.77s  ±0.97%
>> 16            4.85s  ±0.77%
>> 32            6.21s  ±0.46%
> 
> OK, but this will depend on the HW, right? Also, now that I am looking
> at those numbers more closely: this was about unmapping a 320GB area,
> and by using 4 times more CPUs you managed to halve the run time. Is
> this really worth it? Sure, if those CPUs were idle then this is a
> clear win, but if the system is moderately busy then it doesn't look
> like a clear win to me.

This still suffers from zone lock contention.  It scales much better if
we are freeing memory from more than one zone.  We would expect any
other generic page allocator scalability improvements to really help
here, too.

Aaron, could you make sure that the memory being freed comes from
multiple NUMA nodes?  It might also be interesting to boot with a fake
NUMA configuration with a *bunch* of nodes, so that each worker ends up
working on its own zone, to see what the best case looks like when zone
lock contention isn't even in play.
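For reference, NUMA emulation can be set up from the kernel command line on x86 (requires CONFIG_NUMA_EMU); the node count below is just an example:

```
# x86 boot parameter: split memory into 8 emulated NUMA nodes
numa=fake=8

# after boot, inspect the emulated topology
numactl --hardware
```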

>>> Moreover, and this is a more generic question, is this functionality
>>> useful in general purpose workloads? 
>>
>> If we are running consecutive batch jobs, this optimization
>> should help start the next job sooner.
> 
> Is this sufficient justification to add a potentially hard to tune
> optimization that can influence other workloads on the machine?

The guys for whom a reboot is faster than a single exit() certainly
think so. :)

I have the feeling that we can find a pretty sane large process size to
be the floor where this feature gets activated.  I doubt the systems
that really care about noise from other workloads are often doing
multi-gigabyte mapping teardowns.
