Date:	Tue, 15 Jun 2010 21:39:27 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Stefan Richter <stefanr@...6.in-berlin.de>
CC:	Andrew Morton <akpm@...ux-foundation.org>, mingo@...e.hu,
	awalls@...ix.net, linux-kernel@...r.kernel.org, jeff@...zik.org,
	rusty@...tcorp.com.au, cl@...ux-foundation.org,
	dhowells@...hat.com, arjan@...ux.intel.com,
	johannes@...solutions.net, oleg@...hat.com, axboe@...nel.dk
Subject: Re: [PATCHSET] workqueue: concurrency managed workqueue, take#5

Hello,

On 06/15/2010 08:15 PM, Stefan Richter wrote:
> From what I understood, this is about the following:
> 
>   - Right now, a workqueue is backed by either 1 or by #_of_CPUs
>     kernel threads.  There is no other option.
> 
>   - To avoid creating half a million of kernel threads, driver authors
>     resort to either
>        - using the globally shared workqueue even if they might queue
>          high-latency work in corner cases,
>     or
>        - creating a single-threaded workqueue even if they put unrelated
>          jobs into that queue that would be better executed in
>          parallel, not serially.
>     (I for one have both cases in drivers/firewire/, and I have similar
>     issues in the old drivers/ieee1394/.)
> 
> The cmwq patch series reforms workqueues to be backed by a global thread
> pool.  Hence:
> 
>   + Driver authors can and should simply register one queue for any one
>     purpose now.  They don't need to worry anymore about having too many
>     or too few backing threads.

A wq now serves more as a flushing and max-in-flight controlling
domain, so unless a driver needs to flush the whole workqueue (as
opposed to flushing each work item), throttle max-in-flight, or use
the queue in the memory allocation path (in which case an emergency
worker should also be used), the default wq should work fine too.
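
To make that rule of thumb concrete, here is a minimal sketch.  It
assumes the alloc_workqueue()/WQ_MEM_RECLAIM interface the workqueue
API eventually settled on, which may not match the exact names in
this patchset:

    /* Sketch only: names follow the post-merge API and may differ here. */
    #include <linux/workqueue.h>

    static void my_work_fn(struct work_struct *work)
    {
            /* short housekeeping that never runs in the reclaim path */
    }

    static DECLARE_WORK(my_work, my_work_fn);

    /* No queue-wide flush, no max-in-flight limit, not used for memory
     * reclaim: the default (shared) workqueue is enough. */
    static void kick_housekeeping(void)
    {
            schedule_work(&my_work);
    }

    /* The work may run while reclaiming memory, so a dedicated queue with
     * a rescuer is needed; max_active = 1 also throttles in-flight work. */
    static struct workqueue_struct *reclaim_wq;

    static int my_driver_init(void)
    {
            reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 1);
            if (!reclaim_wq)
                    return -ENOMEM;
            return 0;
    }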

>   + [A side effect:  In some cases, a driver that currently uses a
>     thread pool can be simplified by migrating to the workqueue API.]
> 
> Tejun, please correct me if I misunderstood.

Yeap, from a driver's POV, that's mostly precise.  The reason I
started this whole thing is that I was trying to implement in-kernel
media presence polling (mostly for cdroms but it may also be useful
for polling other stuff on other types of devices) and I immediately
got stuck on how to manage concurrency.

I could create a single kthread per drive, which should work fine in
most cases, but there are configurations with a lot of devices where
that is not only wasteful but might actually cause scalability
issues.  For the most common cases, an ST or MT wq would be enough,
but then again, when something gets stuck (unfortunately somewhat
common with cheap drives), the whole queue gets stuck with it.  So I
was thinking about creating a worker pool for it and managing
concurrency myself, which felt very silly.  I just needed some
context to host those polls on demand, and that's not something I
should have to worry about while implementing media presence polling.
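
For concreteness, a hypothetical sketch of such an on-demand polling
context follows; the my_drive / media_poll_fn / POLL_PERIOD names are
made up for illustration, and every drive's poll simply shares the
default queue instead of owning a kthread:

    #include <linux/jiffies.h>
    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    #define POLL_PERIOD     (5 * HZ)        /* assumed polling interval */

    struct my_drive {
            struct delayed_work     media_poll;
            /* ... device state ... */
    };

    static void media_poll_fn(struct work_struct *work)
    {
            struct my_drive *drive = container_of(to_delayed_work(work),
                                                  struct my_drive, media_poll);

            /* Check media presence here.  If a cheap drive wedges and this
             * blocks, the pool spins up another worker, so other drives'
             * polls keep running. */

            schedule_delayed_work(&drive->media_poll, POLL_PERIOD);
    }

    static void my_drive_start_polling(struct my_drive *drive)
    {
            INIT_DELAYED_WORK(&drive->media_poll, media_poll_fn);
            schedule_delayed_work(&drive->media_poll, POLL_PERIOD);
    }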

I think there are many similar situations for drivers.  I already
wrote about libata: it's just silly to worry about how to manage
execution contexts for PIO polling, EH and hotplug from individual
drivers, and drivers often end up making suboptimal choices because
the problem isn't worth solving fully at that layer.  So, cmwq tries
to provide an easy way to get hold of execution contexts on demand.
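
As a rough before/after sketch for a libata-like driver (the names
below are illustrative, not the actual libata symbols):

    #include <linux/workqueue.h>

    /* Before: one single-threaded queue shared by unrelated jobs (PIO
     * polling, EH, hotplug), so one stuck job serializes everything. */
    static struct workqueue_struct *my_ata_wq;

    static int old_style_init(void)
    {
            my_ata_wq = create_singlethread_workqueue("my_ata");
            return my_ata_wq ? 0 : -ENOMEM;
    }

    /* With cmwq: queues are cheap flush/attribute domains backed by the
     * shared worker pool, so each purpose can get its own queue without
     * worrying about how many backing threads that implies. */
    static struct workqueue_struct *my_ata_pio_wq, *my_ata_eh_wq;

    static int cmwq_style_init(void)
    {
            my_ata_pio_wq = alloc_workqueue("my_ata_pio", 0, 0);
            my_ata_eh_wq = alloc_workqueue("my_ata_eh", WQ_MEM_RECLAIM, 1);
            if (!my_ata_pio_wq || !my_ata_eh_wq) {
                    if (my_ata_pio_wq)
                            destroy_workqueue(my_ata_pio_wq);
                    return -ENOMEM;
            }
            return 0;
    }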

Thanks.

-- 
tejun
