Date:	Tue, 14 Jul 2015 16:04:32 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/5] kmod: Use system_unbound_wq instead of khelper

On Fri, Jul 10, 2015 at 02:05:56PM -0500, Christoph Lameter wrote:
> On Fri, 10 Jul 2015, Frederic Weisbecker wrote:
> 
> > Note that nohz full is perfectly fine with that. The issue I'm worried about
> > is the case where drivers spawn hundreds of jobs and it all happens on the same
> > node, because the kernel threads inherit the workqueue affinity instead of
> > the global affinity that khelper had.
> 
> Well if this is working as intended here then the kernel threads will only
> run on a specific cpu. As far as we can tell the amount of kernel threads
> spawned is rather low

Quite high, actually. I count 578 calls on my machine, most of them issued by
the crypto subsystem trying to load modules. And it takes more than one second
to complete all of these requests...
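For what it's worth, a count like that can be reproduced without instrumenting the kernel by pointing the usermode helper at a logging wrapper. This is only a sketch (requires root; the /tmp paths are illustrative), relying on the fact that the kernel invokes whatever `/proc/sys/kernel/modprobe` points at for every request_module() call:

```shell
# Sketch: count module-load requests by wrapping the modprobe helper.
# Requires root; /tmp paths are illustrative, not a recommendation.
cat > /tmp/modprobe-log <<'EOF'
#!/bin/sh
echo "$@" >> /tmp/modprobe.calls       # record each helper invocation
exec /sbin/modprobe "$@"               # then do the real work
EOF
chmod +x /tmp/modprobe-log
echo /tmp/modprobe-log > /proc/sys/kernel/modprobe
# ...after the workload of interest:
wc -l < /tmp/modprobe.calls            # number of usermodehelper launches
```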

> and also the performance requirements on those
> threads are low.

I think it is sensitive given the possibly high number of instances launched.
At least the crypto subsystem hasn't optimized that at all, because all these
instances are serialized: on my machine, they all run on CPU 0.

Now I'm worried about other configs that may launch loads of parallel
usermodehelper threads. That said, if such a thing hasn't been seen as a
problem on small SMP systems, why would it be an issue if we affine them to a
NUMA node, which is usually at least 4 CPUs wide? Or is it possible to see
NUMA nodes with fewer CPUs?
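The per-node CPU width in question can be checked directly from sysfs. A minimal sketch, assuming sysfs is mounted (node directories are simply absent when the kernel exposes no NUMA topology):

```shell
# List the CPU span of each NUMA node the kernel exposes.
for node in /sys/devices/system/node/node*; do
    [ -d "$node" ] || continue
    printf '%s: cpus %s\n' "$(basename "$node")" "$(cat "$node/cpulist")"
done
[ -d /sys/devices/system/node/node0 ] || echo "no NUMA topology exposed"
```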
