Date:   Fri, 19 Jun 2020 19:44:29 -0700
From:   Rick Lindsley <ricklind@...ux.vnet.ibm.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Ian Kent <raven@...maw.net>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Al Viro <viro@...iv.linux.org.uk>,
        David Howells <dhowells@...hat.com>,
        Miklos Szeredi <miklos@...redi.hu>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency
 improvement

On 6/19/20 3:23 PM, Tejun Heo wrote:

> Spending 5 minutes during boot creating sysfs objects doesn't seem like a
> particularly good solution and I don't know whether anyone else would
> experience similar issues. Again, not necessarily against improving the
> scalability of kernfs code but the use case seems a bit out there.

Creating sysfs objects is not a new "solution".  We already do it, and it can take over an hour on a machine with 64TB of memory.  We're not *adding* this ... we're *improving* something that has been around for a long time.

>> They are used for hotplugging and partitioning memory. The size of the
>> segments (and thus the number of them) is dictated by the underlying
>> hardware.
> 
> This sounds so bad. There gotta be a better interface for that, right?

Again, this is not new.  Easily changing the state of devices by writing new values into kernfs is one of the reasons kernfs exists.

     echo 0 > /sys/devices/system/memory/memory10374/online

and boom - you've taken memory chunk 10374 offline.
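For illustration, the same operation from a C program is just an open and a write on that attribute file.  This is a userspace sketch, not kernel code; set_block_online() is an illustrative name of mine, not an existing API, and block 10374 matches the echo example above:

    /* Sketch: set a memory block's online state by writing "0" or "1"
     * to its sysfs "online" attribute.  Needs root, like the echo. */
    #include <errno.h>
    #include <stdio.h>

    static int set_block_online(int block, int online)
    {
            char path[128];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/memory/memory%d/online", block);
            f = fopen(path, "w");
            if (!f)
                    return -errno;
            fprintf(f, "%d\n", online);
            return fclose(f) ? -errno : 0;
    }

    int main(void)
    {
            /* Equivalent to: echo 0 > .../memory10374/online */
            return set_block_online(10374, 0) ? 1 : 0;
    }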

These changes are not just a whim.  I used lockstat to measure contention during boot.  Adding 250,000 "devices" in parallel created tremendous contention on the kernfs_mutex and, it appears, on one of the directories within it where memory nodes are created.  Using a single mutex means the cache line holding that mutex must bounce around among all the CPUs ... did I mention 1500+ CPUs?  A whole lot of thrash ...

These patches turn that mutex into an rw semaphore, so any thread checking for the mere existence of something can get by with a read lock.  Lockstat showed that about 90% of the mutex contentions were in a read path and only 10% in a write path.  Switching to an rw semaphore meant that 90% of the time there was no waiting and the thread proceeded with its caches intact.  Total time spent waiting on either read or write decreased by 75%, and that should scale much better with subsequent hardware upgrades.
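To make the shape of the change concrete, here is a userspace analogue using a pthreads rwlock.  The actual patches use the kernel's struct rw_semaphore with down_read()/down_write(); the "tree" below is a stand-in, so treat this as an illustration of the pattern, not the patch itself:

    /* Illustrative only: replace one big mutex with a reader/writer
     * lock so existence checks (the ~90% read path) no longer
     * serialize against each other.  Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;
    static int node_count;          /* stand-in for the kernfs tree */

    /* Read path: many threads may hold the lock shared at once. */
    static int node_exists(int id)
    {
            int found;

            pthread_rwlock_rdlock(&tree_lock);
            found = (id < node_count);
            pthread_rwlock_unlock(&tree_lock);
            return found;
    }

    /* Write path: adding a node still takes the lock exclusive. */
    static void add_node(void)
    {
            pthread_rwlock_wrlock(&tree_lock);
            node_count++;
            pthread_rwlock_unlock(&tree_lock);
    }

    int main(void)
    {
            add_node();
            printf("node 0 exists: %d\n", node_exists(0));
            return 0;
    }

With a plain mutex, every lookup is an exclusive acquisition; with the reader/writer lock only the ~10% write path is, which is where the 75% drop in total wait time comes from.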

With contention on this particular data structure reduced, we saw thrash on related control structures decrease markedly too.  Contention for the lockref taken in dput dropped 66% and, likely due to reduced thrash, time spent waiting on that structure dropped 99%.

Rick
