Message-ID: <20170324070428.GA7258@aaronlu.sh.intel.com>
Date:   Fri, 24 Mar 2017 15:04:28 +0800
From:   Aaron Lu <aaron.lu@...el.com>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Tim Chen <tim.c.chen@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ying Huang <ying.huang@...el.com>
Subject: Re: [PATCH v2 0/5] mm: support parallel free of memory

On Tue, Mar 21, 2017 at 07:54:37AM -0700, Dave Hansen wrote:
> On 03/16/2017 02:07 AM, Michal Hocko wrote:
> > On Wed 15-03-17 14:38:34, Tim Chen wrote:
> >> max_active:   time
> >> 1             8.9s   ±0.5%
> >> 2             5.65s  ±5.5%
> >> 4             4.84s  ±0.16%
> >> 8             4.77s  ±0.97%
> >> 16            4.85s  ±0.77%
> >> 32            6.21s  ±0.46%
> > 
> > OK, but this will depend on the HW, right? Also, now that I am looking at
> > those numbers more closely: this was about unmapping a 320GB area, and
> > using 4 times more CPUs you managed to halve the run time. Is this really
> > worth it? Sure, if those CPUs were idle then this is a clear win, but if
> > the system is moderately busy then it doesn't look like a clear win to
> > me.
> 
> This still suffers from zone lock contention.  It scales much better if
> we are freeing memory from more than one zone.  We would expect any
> other generic page allocator scalability improvements to really help
> here, too.
> 
> Aaron, could you make sure that the memory being freed is
> coming from multiple NUMA nodes?  It might also be interesting to boot
> with a fake NUMA configuration with a *bunch* of nodes to see what the
> best case looks like when zone lock contention isn't even in play and
> each worker is working on its own zone.

This fake NUMA configuration is great for this purpose; I didn't know we
had this support in the kernel.

So I added numa=fake=128 and also wrote a new test program (attached)
that mmap()s 321G of memory and makes sure it is distributed equally
across 107 nodes, i.e. 3G on each node. This is achieved by calling
mbind() on each node's slice before touching the memory.
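
For readers without the attachment, here is a minimal sketch of that
approach. It is illustrative only, not the attached node_alloc.c, and
assumes 107 fake nodes, 3G per node and libnuma's mbind() wrapper
(build with -lnuma):

#include <numaif.h>      /* mbind(), MPOL_BIND */
#include <sys/mman.h>
#include <string.h>

#define NR_NODES   107UL
#define PER_NODE   (3UL << 30)               /* 3G per node (assumption) */
#define TOTAL      (NR_NODES * PER_NODE)     /* 321G */

int main(void)
{
	char *buf = mmap(NULL, TOTAL, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	for (unsigned long node = 0; node < NR_NODES; node++) {
		/* One bit per node; the mask must cover all fake nodes. */
		unsigned long nodemask[4] = { 0 };
		nodemask[node / (8 * sizeof(long))] |=
			1UL << (node % (8 * sizeof(long)));

		char *slice = buf + node * PER_NODE;

		/* Bind this 3G slice to a single node... */
		if (mbind(slice, PER_NODE, MPOL_BIND, nodemask,
			  sizeof(nodemask) * 8, 0))
			return 1;

		/* ...then fault it in so the pages are allocated there. */
		memset(slice, 1, PER_NODE);
	}

	/* The munmap() of the whole range is what the parallel free speeds up. */
	munmap(buf, TOTAL);
	return 0;
}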

Then I enlarged max_gather_batch_count to 1543 so that during zap,
3G of memory is sent to a kworker for freeing instead of the default 1G.
This way, each kworker should be working on a different node.
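
As a sanity check on the 1543 figure, the rough arithmetic is below. It
assumes x86-64 with 4K pages and ~510 page pointers per gather batch;
these constants are my assumption, not taken from the patch itself:

#include <stdio.h>

#define PAGE_SIZE_BYTES   4096UL
#define MAX_GATHER_BATCH  ((PAGE_SIZE_BYTES / sizeof(void *)) - 2)  /* ~510 pages/batch */

int main(void)
{
	unsigned long batch_count = 1543;
	unsigned long bytes = batch_count * MAX_GATHER_BATCH * PAGE_SIZE_BYTES;

	/* Prints roughly 3.00 GiB: one kworker handles about a node's worth. */
	printf("%.2f GiB per kworker\n", bytes / (double)(1UL << 30));
	return 0;
}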

With this change, the time to free the 321G of memory is reduced to:

	3.23s ±13.7%  (about 70% decrease)

Lock contention is 1.81%:

        19.60%  [kernel.kallsyms]  [k] release_pages
        13.30%  [kernel.kallsyms]  [k] unmap_page_range
        13.18%  [kernel.kallsyms]  [k] free_pcppages_bulk
         8.34%  [kernel.kallsyms]  [k] __mod_zone_page_state
         7.75%  [kernel.kallsyms]  [k] page_remove_rmap
         7.37%  [kernel.kallsyms]  [k] free_hot_cold_page
         6.06%  [kernel.kallsyms]  [k] free_pages_and_swap_cache
         3.53%  [kernel.kallsyms]  [k] __list_del_entry_valid
         3.09%  [kernel.kallsyms]  [k] __list_add_valid
         1.81%  [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
         1.79%  [kernel.kallsyms]  [k] uncharge_list
         1.69%  [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
         1.60%  [kernel.kallsyms]  [k] vm_normal_page
         1.46%  [kernel.kallsyms]  [k] __dec_node_state
         1.41%  [kernel.kallsyms]  [k] __mod_node_page_state
         1.20%  [kernel.kallsyms]  [k] __tlb_remove_page_size
         0.85%  [kernel.kallsyms]  [k] mem_cgroup_page_lruvec

From 'vmstat 1', the runnable process count peaked at 6 during munmap():
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 189114560      0 761292    0    0     0     0   70  146  0  0 100  0  0
 3  0      0 189099008      0 759932    0    0     0     0 2536  382  0  0 100  0  0
 6  0      0 274378848      0 759972    0    0     0     0 11332  249  0  3 97  0  0
 5  0      0 374426592      0 759972    0    0     0     0 13576  196  0  3 97  0  0
 4  0      0 474990144      0 759972    0    0     0     0 13250  227  0  3 97  0  0
 0  0      0 526039296      0 759972    0    0     0     0 6799  246  0  2 98  0  0
^C

This appears to be the best result from this approach.

View attachment "node_alloc.c" of type "text/plain" (1970 bytes)
