Date:	Fri, 21 Aug 2009 08:58:09 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Tejun Heo <htejun@...il.com>
Cc:	linux-kernel@...r.kernel.org, jeff@...zik.org,
	benh@...nel.crashing.org, bzolnier@...il.com,
	alan@...rguk.ukuu.org.uk
Subject: Re: [PATCH 0/6] Lazy workqueues

On Thu, Aug 20 2009, Tejun Heo wrote:
> Hello, Jens.
> 
> Jens Axboe wrote:
> > After yesterday's rant on having too many kernel threads and checking
> > how many I actually have running on this system (531!), I decided to
> > try and do something about it.
> 
> Heh... that's a lot.  How many cpus do you have there?  Care to share
> the output of "ps -ef"?

That system has 64 cpus. ps -ef attached.

> > My goal was to retain the workqueue interface instead of coming up with
> > a new scheme that required conversion (or converting to slow_work which,
> > btw, is an awful name :-). I also wanted to retain the affinity
> > guarantees of workqueues as much as possible.
> > 
> > So this is a first step in that direction, it's probably full of races
> > and holes, but should get the idea across. It adds a
> > create_lazy_workqueue() helper, similar to the other variants that we
> > currently have. A lazy workqueue works like a normal workqueue, except
> > that it only (by default) starts a core thread instead of threads for
> > all online CPUs. When work is queued on a lazy workqueue for a CPU
> > that doesn't have a thread running, it will be placed on the core
> > CPU's list, and the core CPU will then create the target thread and
> > move the work over to it. Should thread creation fail, the queued
> > work will be executed on the core CPU instead. Once a lazy workqueue
> > thread has been idle for a certain amount of time, it will exit again.
> 
> Yeap, the approach seems simple and nice and resolves the problem of
> too many idle workers.

I think so too :-)
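
To make that a bit more concrete for anyone skimming the thread, here's
a toy user-space sketch of the idea in plain pthreads. It is not the
patch -- it simplifies things by letting the queueing caller create the
worker directly instead of bouncing through the core CPU's thread, and
all the names are made up -- but it shows the three moving parts:
on-demand worker creation, the inline fallback when thread creation
fails, and the idle-timeout exit.

/* lazy_pool.c -- toy user-space sketch of the lazy workqueue idea:
 * per-"cpu" workers created on demand, exiting after IDLE_SECS idle.
 * Build: cc -pthread lazy_pool.c -o lazy_pool */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NR_SLOTS  4
#define IDLE_SECS 2

struct work { void (*fn)(int cpu); struct work *next; };

struct slot {
	pthread_mutex_t lock;
	pthread_cond_t more_work;
	struct work *head, *tail;
	int have_worker, cpu;
};

static struct slot slots[NR_SLOTS];

static void *worker(void *arg)
{
	struct slot *s = arg;
	struct timespec ts;

	pthread_mutex_lock(&s->lock);
	for (;;) {
		while (s->head) {
			struct work *w = s->head;
			s->head = w->next;
			if (!s->head)
				s->tail = NULL;
			pthread_mutex_unlock(&s->lock);
			w->fn(s->cpu);	/* run the work unlocked */
			free(w);
			pthread_mutex_lock(&s->lock);
		}
		clock_gettime(CLOCK_REALTIME, &ts);
		ts.tv_sec += IDLE_SECS;
		if (pthread_cond_timedwait(&s->more_work, &s->lock, &ts) &&
		    !s->head)
			break;	/* idled out: exit like a lazy wq thread */
	}
	s->have_worker = 0;
	pthread_mutex_unlock(&s->lock);
	return NULL;
}

static void queue_work_on(int cpu, void (*fn)(int))
{
	struct slot *s = &slots[cpu];
	struct work *w = malloc(sizeof(*w));
	pthread_t tid;

	w->fn = fn;
	w->next = NULL;
	pthread_mutex_lock(&s->lock);
	if (!s->have_worker && pthread_create(&tid, NULL, worker, s) == 0) {
		pthread_detach(tid);
		s->have_worker = 1;
	}
	if (s->have_worker) {	/* append and kick the worker */
		if (s->tail)
			s->tail->next = w;
		else
			s->head = w;
		s->tail = w;
		pthread_cond_signal(&s->more_work);
		pthread_mutex_unlock(&s->lock);
	} else {	/* creation failed: run it right here */
		pthread_mutex_unlock(&s->lock);
		fn(cpu);
		free(w);
	}
}

static void hello(int cpu)
{
	printf("cpu %d work ran in thread %lu\n",
	       cpu, (unsigned long)pthread_self());
}

int main(void)
{
	int i;

	for (i = 0; i < NR_SLOTS; i++) {
		slots[i].cpu = i;
		pthread_mutex_init(&slots[i].lock, NULL);
		pthread_cond_init(&slots[i].more_work, NULL);
	}
	queue_work_on(0, hello);	/* spawns slot 0's worker */
	queue_work_on(3, hello);	/* spawns another worker */
	sleep(IDLE_SECS + 1);	/* let the workers run and idle out */
	return 0;
}

Note the "&& !s->head" on the timedwait: if work was queued between the
timeout firing and the worker reacquiring the lock, the worker processes
it instead of exiting, which closes the obvious queue-vs-exit race.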

> > The patch boots here and I exercised the rpciod workqueue and
> > verified that its thread gets created, runs on the right CPU, and
> > exits again a while later. So the core functionality should be
> > there, even if it has holes.
> > 
> > With this patchset, I am now down to 280 kernel threads on one of my test
> > boxes. Still too many, but it's a start and a net reduction of 251
> > threads here, or 47%!
> 
> I'm trying to find out whether the perfect concurrency idea I talked
> about on the other thread can be implemented in a reasonable manner.
> Would you mind holding off for a few days before investing too much
> effort into expanding this one to handle multiple workers?

No problem, I'll just get the races closed up in the existing version.

I think we basically have two classes of users here. One is what the
existing workqueue scheme works well for: high performance work
execution where CPU affinity matters. The other is just slow work
execution (like the libata PIO task stuff), which would be better
handled by a generic thread pool implementation. I think we should
start converting those users to slow_work; in fact I'll try libata
first, to set a good example :-)
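
For the curious, a conversion would look roughly like the below. This
is typed from memory against the current slow_work interface, the foo_*
names are made up, and it's completely untested, so check
Documentation/slow-work.txt before copying anything:

#include <linux/kernel.h>
#include <linux/slow-work.h>

struct foo_task {
	struct slow_work sw;
	/* driver-specific state goes here */
};

/* slow_work wants refcount hooks; no-ops are fine if the containing
 * object is guaranteed to outlive the queued work. */
static int foo_get_ref(struct slow_work *work)
{
	return 0;
}

static void foo_put_ref(struct slow_work *work)
{
}

static void foo_execute(struct slow_work *work)
{
	struct foo_task *task = container_of(work, struct foo_task, sw);

	/* the slow, sleepy grunt work (e.g. polled PIO) goes here */
}

static const struct slow_work_ops foo_ops = {
	.get_ref	= foo_get_ref,
	.put_ref	= foo_put_ref,
	.execute	= foo_execute,
};

/* one-time setup (e.g. module init): call slow_work_register_user()
 * to up the shared pool's user count; then per work item: */
static void foo_kick(struct foo_task *task)
{
	slow_work_init(&task->sw, &foo_ops);
	slow_work_enqueue(&task->sw);
}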

-- 
Jens Axboe


[Attachment: "ps-ef.txt" (text/plain, 25897 bytes)]
