Date:	Mon, 21 Dec 2009 23:19:34 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Arjan van de Ven <arjan@...ux.intel.com>
CC:	Jens Axboe <jens.axboe@...cle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Peter Zijlstra <peterz@...radead.org>,
	torvalds@...ux-foundation.org, awalls@...ix.net,
	linux-kernel@...r.kernel.org, jeff@...zik.org, mingo@...e.hu,
	akpm@...ux-foundation.org, rusty@...tcorp.com.au,
	cl@...ux-foundation.org, dhowells@...hat.com, avi@...hat.com,
	johannes@...solutions.net
Subject: Re: workqueue thing

Hello, Arjan.

On 12/21/2009 10:53 PM, Arjan van de Ven wrote:
> I'm just wondering if even that is overkill; I suspect you can do
> entirely without the scheduler intrusion;
> just make a new thread for each work item, with some hysteresis:
>
> * threads should stay around for a bit before dying (you do that)
> * after some minimum nr of threads (say 4 per cpu), you wait, say,
>   0.1 seconds before deciding it's time to spawn more threads, to
>   smooth out spikes of very short-lived stuff.
> 
> wouldn't that be a lot simpler than "ask the scheduler to see if
> they are all blocked"?  If they are all very busy churning CPU (say
> doing raid6 work, or btrfs checksumming) you would still want more
> threads, I suspect.
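
(To make sure we're on the same page, here is roughly how I read the
proposal.  Everything below, struct, helpers and constants alike, is
made up for illustration, not from any actual patch.)

#define MIN_WORKERS_PER_CPU	4
#define SPAWN_DELAY		(HZ / 10)	/* 0.1s smoothing window */

struct worker_pool {
	unsigned int	nr_workers;	/* threads currently alive */
	unsigned long	first_pending;	/* jiffies when work began queueing */
	bool		work_pending;
};

static void pool_maybe_spawn(struct worker_pool *pool)
{
	if (!pool->work_pending)
		return;

	/* Below the per-cpu minimum, spawn immediately. */
	if (pool->nr_workers < MIN_WORKERS_PER_CPU) {
		spawn_worker(pool);		/* hypothetical helper */
		return;
	}

	/*
	 * At or above the minimum, wait out the smoothing window so
	 * that spikes of very short-lived work don't spawn threads
	 * which would die right away.
	 */
	if (time_after(jiffies, pool->first_pending + SPAWN_DELAY))
		spawn_worker(pool);
}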

Ah... okay, there are two ways cmwq involves the scheduler.

A. Concurrency management.  This is achieved by scheduler callbacks
   which watch how many workers are working.

B. Deadlock avoidance.  This requires migrating rescuers to CPUs under
   allocation distress.  The problem here is that
   set_cpus_allowed_ptr() doesn't allow migrating tasks to CPUs which
   are online but !active (CPU_DOWN_PREPARE).

B would be necessary whichever way you implement a shared worker
pool, unless you create in advance all the workers which might
possibly be needed for allocation.
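
To illustrate what the rescuer path needs (the names below are
hypothetical, not an existing interface):

static void rescuer_bind_to_cpu(struct task_struct *rescuer, int cpu)
{
	/* Fine while the target CPU is fully active ... */
	if (!set_cpus_allowed_ptr(rescuer, cpumask_of(cpu)))
		return;

	/*
	 * ... but this fails during CPU_DOWN_PREPARE even though works
	 * may still be pending on that CPU.  The rescuer has to be
	 * migrated forcibly as long as the CPU is still online;
	 * force_cpus_allowed() is assumed here as a variant of
	 * set_cpus_allowed_ptr() which checks only cpu_online_mask.
	 */
	force_cpus_allowed(rescuer, cpumask_of(cpu));
}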

For A, it's far more efficient and robust with scheduler callbacks.
It's conceptually pretty simple too.  If you look at the patch which
actually implements the dynamic pool, the amount of code necessary to
implement this part isn't that big.  Most of the complexity in the
series comes from trying to share workers, not from the dynamic pool
management.  Even if it switched to a timer-based heuristic, there
simply wouldn't be much reduction in complexity.  So, I don't think
there's any reason to choose rather fragile heuristics when the same
thing can be implemented in a pretty mechanical way.
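
Concretely, the callback for A boils down to something like this (a
sketch with made-up names, not what the series actually adds):

/*
 * Called from schedule() when a worker is about to block.  Returns an
 * idle worker for the scheduler to wake up if one is needed to keep
 * the queue draining, NULL otherwise.
 */
struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu)
{
	struct worker_pool *pool = pool_of(task, cpu);	/* hypothetical */

	/*
	 * One less worker running on this CPU.  If no runnable worker
	 * is left while work is still queued, wake an idle one so the
	 * pool doesn't stall behind a blocked worker.
	 */
	if (atomic_dec_and_test(&pool->nr_running) &&
	    !list_empty(&pool->worklist))
		return first_idle_worker(pool);		/* hypothetical */
	return NULL;
}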

Thanks.

-- 
tejun
