Message-ID: <20200620162752.GF8681@bombadil.infradead.org>
Date:   Sat, 20 Jun 2020 09:27:52 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>
Cc:     Junxiao Bi <junxiao.bi@...cle.com>, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org,
        Matthew Wilcox <matthew.wilcox@...cle.com>,
        Srinivas Eeda <SRINIVAS.EEDA@...cle.com>,
        "joe.jin@...cle.com" <joe.jin@...cle.com>,
        Wengang Wang <wen.gang.wang@...cle.com>
Subject: Re: [PATCH] proc: Avoid a thundering herd of threads freeing proc
 dentries

On Fri, Jun 19, 2020 at 05:42:45PM -0500, Eric W. Biederman wrote:
> Junxiao Bi <junxiao.bi@...cle.com> writes:
> > Still high lock contention. Collect the following hot path.
> 
> A different location this time.
> 
> I know of at least exit_signal and exit_notify that take thread wide
> locks, and it looks like exit_mm is another.  Those don't use the same
> locks as flushing proc.
> 
> 
> So I think you are simply seeing a result of the thundering herd of
> threads shutting down at once.  Given that thread shutdown is fundamentally
> a slow path there is only so much that can be done.
> 
> If you are up for a project of working through this thundering herd,
> I expect I can help some.  It will be a long process of cleaning up
> the entire thread exit process with an eye to performance.

Wengang had some tests which produced wall-clock values for this problem,
which I agree are more informative.

I'm not entirely sure what the customer workload is that requires a
highly threaded workload to also shut down quickly.  To my mind, an
overall workload is normally composed of highly-threaded tasks that run
for a long time and only shut down rarely (thus performance of shutdown
is not important) and single-threaded tasks that run for a short time.

Understanding this workload is important to my next suggestion, which
is that rather than searching for all the places in the exit path which
contend on a single spinlock, we simply set the allowed CPUs for an
exiting task to include only the CPU that this thread is running on.
It will probably run faster to take the threads down in series on one
CPU rather than take them down in parallel across many CPUs (or am I
mistaken?  Is there inherently a lot of parallelism in the thread
exiting process?)
