Date:	Mon, 21 Dec 2009 22:18:11 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Arjan van de Ven <arjan@...ux.intel.com>
CC:	Andi Kleen <andi@...stfloor.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	torvalds@...ux-foundation.org, awalls@...ix.net,
	linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
	akpm@...ux-foundation.org, rusty@...tcorp.com.au,
	cl@...ux-foundation.org, dhowells@...hat.com, avi@...hat.com,
	johannes@...solutions.net
Subject: Re: workqueue thing

Hello, Arjan.

On 12/21/2009 08:17 PM, Arjan van de Ven wrote:
>>> One would hope not, since that is by no means outside of what you see on
>>> boxes today... Thousands. The fact that they are cheap, is not an
>>> argument against doing it right. Conceptually, I think the concurrency
>>> managed work queue pool is a much cleaner (and efficient) design.
>>
>> Agreed. Even if possible thousands of threads waste precious cache.
> 
> only used ones waste cache ;-)

Yes, and using dedicated threads increases the number of used stacks.
I.e. with cmwq, in most cases, only a few stacks would be active and
shared among different works.  With workqueues with dedicated workers,
different types of works will always end up using different stacks,
thus unnecessarily increasing the cache footprint.

>> And they look ugly in ps.
> 
> that we could solve by making them properly threads of each other;
> ps and co already (at least by default) fold threads of the same
> program into one.

That way poses two unnecessary problems.  First, it easily incurs
scalability issues.  E.g. I've been thinking about making block EHs
per-device so that per-device EH actions can be implemented which
won't block the whole host.  If I do this with dedicated threads,
allocating a single thread per block device would be the easiest,
right?  The problem is that there are machines with tens of thousands
of LUNs (not that uncommon either), and such a design would simply
collapse there.

Such potential scalability issues would thus require special crafting
at the block layer to manage concurrency, to guarantee both EH forward
progress and a proper level of concurrency without paying too much
upfront.  We'd need another partial solution to solve concurrency
there, and it never stops there.  What about in-kernel media presence
polling?  Or ATA PIO pollers?

>> Also the nice thing about dynamically sizing the thread pool
>> is that if something bad (error condition that takes long) happens
>> in one work queue for a specific subsystem there's still a chance
>> to make progress with other operations in the same subsystem.
> 
> yup same is true for hitting some form of contention; just make an
> extra thread so that the rest can continue.

cmwq tries to do exactly that.  It uses scheduler notifications to
detect those contentions and creates new workers if everything is
blocked.  The reason why rescuers are required is to guarantee forward
progress in creating workers.

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
