Date:	Wed, 23 Dec 2009 13:18:40 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	torvalds@...ux-foundation.org, awalls@...ix.net,
	linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
	akpm@...ux-foundation.org, jens.axboe@...cle.com,
	rusty@...tcorp.com.au, cl@...ux-foundation.org,
	dhowells@...hat.com, arjan@...ux.intel.com, avi@...hat.com,
	johannes@...solutions.net, andi@...stfloor.org
Subject: Re: workqueue thing

Hello,

On 12/22/2009 08:06 PM, Peter Zijlstra wrote:
> On Tue, 2009-12-22 at 08:50 +0900, Tejun Heo wrote:
>>
>>>  3) gets fragile at memory-pressure/reclaim
>>
>> Shared dynamic pool is going to be affected by memory pressure no
>> matter how you implement it.  cmwq tries to maintain stable level of
>> workers and has forward progress guarantee.  If you're gonna do shared
>> pool, it can't get much better. 
> 
> And here I'm questioning the very need for shared stuff, I don't see
> any. That is, I'm not seeing it being worth the hassle.

Then you see the situation pretty differently from the way I do.
Maybe it's caused by the different things we work on.  Whenever I
want to create something which would need async context, I'm always
faced with these tradeoffs that I think are silly to worry about at
that layer.  It ends up scattering partial solutions all over the
place.

libata has two workqueues just because one may depend on the other.
The workqueue used for polling is MT to increase parallelism in case
there are multiple devices which require polling, but it's both
wasteful and insufficient: the extra threads sit unused most of the
time, yet they still aren't enough when there are multiple pollers on
the same CPU.  libata just had to make a rather mediocre
middle-of-the-road tradeoff between having one poller for each device
and sharing a single poller for all devices.
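
Just to illustrate the either/or the current API forces (a sketch
with hypothetical names, not the actual libata code):

#include <linux/workqueue.h>

static struct workqueue_struct *poll_wq;

static int __init poll_init(void)
{
	/* one thread total: every poller in the system serializes ... */
	poll_wq = create_singlethread_workqueue("poll");

	/* ... or one thread per CPU: mostly idle, yet pollers which
	 * land on the same CPU still serialize: */
	/* poll_wq = create_workqueue("poll"); */

	return poll_wq ? 0 : -ENOMEM;
}

Neither choice is right; the concurrency should come from a shared
pool instead of being baked into the workqueue at creation time.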

The same goes for EH threads.  How heavily they are used depends on
the system configuration.  For example, libata handles ATAPI CHECK
CONDITION as an exception and acquires sense data from the exception
handler, and that happens pretty frequently.  So, I want to have
per-device EHs and have ideas on how to escalate from device-level EH
to host-level EH.  The problem here again is how to maintain the
concurrency, because having a single kthread for each block device
won't be acceptable from a scalability POV.
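
To make the direction concrete, per-device EH could become a work
item on a shared pool instead of a kthread per device (a sketch;
struct and function names here are hypothetical):

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct dev_eh {
	struct work_struct eh_work;	/* INIT_WORK() at probe time */
	/* ... device state ... */
};

static void dev_eh_fn(struct work_struct *work)
{
	struct dev_eh *deh = container_of(work, struct dev_eh, eh_work);

	/* acquire sense data, retry, or escalate to host-level EH */
}

static void dev_eh_schedule(struct dev_eh *deh)
{
	/* no dedicated kthread; the pool provides the concurrency */
	schedule_work(&deh->eh_work);
}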

Another similar but less severe problem is in-kernel media presence
pollers.  Here, I think I could have a single poller for each device
without too many scalability issues, but it just isn't efficient
because most of the time one poller would be enough.  It's only when
you get to the corner cases or error conditions that you would need
more than one.  So, again, I could implement a special poller pool
for this one.
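
Something like a self-rearming delayed work item per device would do
(sketch; names and interval hypothetical):

#include <linux/kernel.h>
#include <linux/workqueue.h>

#define MEDIA_POLL_INTERVAL	(HZ / 2)	/* hypothetical */

struct media_poller {
	struct delayed_work dwork;	/* INIT_DELAYED_WORK() at setup */
};

static void media_poll_fn(struct work_struct *work)
{
	struct media_poller *mp =
		container_of(to_delayed_work(work),
			     struct media_poller, dwork);

	/* check media presence ... */

	/* rearm; a shared pool absorbs the corner cases where more
	 * than one poller needs to run at the same time */
	schedule_delayed_work(&mp->dwork, MEDIA_POLL_INTERVAL);
}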

And there are slow work and async, both of which exist just to
provide process context to tasks which may take quite some time to
complete while waiting for IOs, and quite a few ST workqueues which
got separated out because they somehow got involved in some obscure
deadlock condition; the only reason they're ST is that MT would
create too many threads.  CPU affinity would work better for them,
but they have to make these tradeoffs.
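
async, for example, exists mostly so that something like the
following can sleep without tying up a thread of its own (sketch;
function names hypothetical):

#include <linux/async.h>

static void my_probe_async(void *data, async_cookie_t cookie)
{
	/* 'data' is the hypothetical device; this runs in an async
	 * thread and may sleep for a long time waiting on IO */
}

static void my_probe(void *hw)
{
	async_schedule(my_probe_async, hw);
}

A single shared pool could provide the same process context without a
separate facility and a separate thread population.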

So, if we can have a mechanism which solves these issues, it's an
obvious plus.  Shifting complexity out of peripheral code into better
crafted and managed core code is the right thing to do, and it will
shift a lot of complexity out of peripheral code.

Thanks.

-- 
tejun