Date:   Fri, 2 Feb 2018 10:50:37 +0000
From:   Steven Whitehouse <swhiteho@...hat.com>
To:     Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     aaron.lu@...el.com, ak@...ux.intel.com, akpm@...ux-foundation.org,
        Dave.Dice@...cle.com, dave@...olabs.net,
        khandual@...ux.vnet.ibm.com, ldufour@...ux.vnet.ibm.com,
        mgorman@...e.de, mhocko@...nel.org, pasha.tatashin@...cle.com,
        steven.sistare@...cle.com, yossi.lev@...cle.com
Subject: Re: [RFC PATCH v1 00/13] lru_lock scalability

Hi,


On 02/02/18 04:18, Daniel Jordan wrote:
>
>
> On 02/01/2018 10:54 AM, Steven Whitehouse wrote:
>> Hi,
>>
>>
>> On 31/01/18 23:04, daniel.m.jordan@...cle.com wrote:
>>> lru_lock, a per-node* spinlock that protects an LRU list, is one of the
>>> hottest locks in the kernel.  On some workloads on large machines, it
>>> shows up at the top of lock_stat.
>>>
>>> One way to improve lru_lock scalability is to introduce an array of 
>>> locks,
>>> with each lock protecting certain batches of LRU pages.
>>>
>>>          *ooooooooooo**ooooooooooo**ooooooooooo**oooo ...
>>>          |           ||           ||           ||
>>>           \ batch 1 /  \ batch 2 /  \ batch 3 /
>>>
>>> In this ASCII depiction of an LRU, a page is represented with either 
>>> '*'
>>> or 'o'.  An asterisk indicates a sentinel page, which is a page at the
>>> edge of a batch.  An 'o' indicates a non-sentinel page.
>>>
>>> To remove a non-sentinel LRU page, only one lock from the array is
>>> required.  This allows multiple threads to remove pages from different
>>> batches simultaneously.  A sentinel page requires lru_lock in 
>>> addition to
>>> a lock from the array.
>>>
>>> Full performance numbers appear in the last patch in this series, 
>>> but this
>>> prototype allows a microbenchmark to do up to 28% more page faults per
>>> second with 16 or more concurrent processes.
>>>
>>> This work was developed in collaboration with Steve Sistare.
>>>
>>> Note: This is an early prototype.  I'm submitting it now to support my
>>> request to attend LSF/MM, as well as get early feedback on the 
>>> idea.  Any
>>> comments appreciated.
>>>
>>>
>>> * lru_lock is actually per-memcg, but without memcg's in the picture it
>>>    becomes per-node.
>> GFS2 has an lru list for glocks, which can be contended under certain 
>> workloads. Work is still ongoing to figure out exactly why, but this 
>> looks like it might be a good approach to that issue too. The main 
>> purpose of GFS2's lru list is to allow shrinking of the glocks under 
>> memory pressure via the gfs2_scan_glock_lru() function, and it looks 
>> like this type of approach could be used there to improve the 
>> scalability.
>
> Glad to hear that this could help in gfs2 as well.
>
> Hopefully struct gfs2_glock is less space constrained than struct page 
> for storing the few bits of metadata that this approach requires.
>
> Daniel
>
We obviously want to keep gfs2_glock small; within reason, though, we 
can add some additional fields as required. The use case is pretty much 
a standard LRU list: items are added and removed, mostly at the active 
end of the list, and the inactive end of the list is scanned 
periodically by gfs2_scan_glock_lru().

Steve.
