Message-ID: <4B1F59EC.3030400@kernel.org>
Date:	Wed, 09 Dec 2009 17:03:56 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	tglx@...utronix.de, mingo@...e.hu, avi@...hat.com, efault@....de,
	rusty@...tcorp.com.au, linux-kernel@...r.kernel.org,
	Gautham R Shenoy <ego@...ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 4/7] sched: implement force_cpus_allowed()

Hello,

On 12/09/2009 04:41 PM, Peter Zijlstra wrote:
> On Wed, 2009-12-09 at 14:25 +0900, Tejun Heo wrote:
>> As for the force_cpus_allowed() bit, I think it's a rather natural
>> interface to have, and maybe we can replace kthread_bind() with it
>> or implement kthread_bind() in terms of it.  It's the basic
>> migration function which adheres to the cpu hotplug/unplug
>> synchronization rules.
> 
> I quite disagree; it's quite unnatural to be adding threads to a cpu
> that is about to go down, or about to come up for that matter, and we
> most certainly don't want to add to the hotplug rules; there are
> quite enough of them already.

I don't think it adds to the hotplug rules in any way.  While a cpu is
going down, there's a single atomic moment when all threads are taken
off the cpu, and that's the natural synchronization point.  You can
add more restrictions on top of that, like the cpu activeness test, to
avoid certain undesirable behaviors, but in the end the ultimate sync
point is the final migration off the cpu.  That said, let's agree to
disagree here.  We're talking about what's 'natural', and that largely
depends on personal POV, right?
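
To illustrate what I mean, here's a rough sketch (illustration only,
not the actual patch code; __migrate_task_to() is a made-up stand-in
for whatever does the real runqueue surgery):

/* normal path: honors the activeness test, so it refuses cpus
 * which are online but already deactivated for unplug */
int set_cpus_allowed_ptr(struct task_struct *p,
			 const struct cpumask *new_mask)
{
	if (!cpumask_intersects(new_mask, cpu_active_mask))
		return -EINVAL;
	return __migrate_task_to(p, new_mask);
}

/* forced path: checks only the ultimate sync point -- actual
 * onlineness -- i.e. the single atomic moment described above */
int force_cpus_allowed(struct task_struct *p,
		       const struct cpumask *new_mask)
{
	if (!cpumask_intersects(new_mask, cpu_online_mask))
		return -EINVAL;
	return __migrate_task_to(p, new_mask);
}

/* and kthread_bind() becomes expressible in terms of it */
void kthread_bind(struct task_struct *p, unsigned int cpu)
{
	force_cpus_allowed(p, cpumask_of(cpu));
	p->flags |= PF_THREAD_BOUND;
}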

> You could always do it from CPU_ONLINE, since they don't care about
> cpu affinity anyway (the thing was down, for crying out loud); it
> really doesn't matter when they're moved back, if at all.
> 
> I still think it's utter insanity to even consider moving them back,
> or, for that matter, to have worklets that take minutes to complete;
> that's just daft.
>
> I think I'm going to NAK all of this; it looks quite ill-conceived.

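For what it's worth, the CPU_ONLINE route would look something like
the following (completely untested sketch using the old cpu notifier
interface; rebind_workers() is pseudo-code, not a real function):

static int wq_cpu_callback(struct notifier_block *nb,
			   unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action) {
	case CPU_ONLINE:
		/* the cpu is fully up and active again, so moving
		 * workers back here needs no hotplug exceptions */
		rebind_workers(cpu);
		break;
	}
	return NOTIFY_OK;
}
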
Alright, fair enough.  So, I get that you NACK the whole concurrency
managed workqueue thing.  The reason this part was split out and sent
separately for the scheduler tree was that I thought there was general
consensus toward having concurrency managed workqueues and their basic
design.  It looks like we'll need another round on that.

Here's what I'll do.  I'm about done with the second round of properly
split cmwq patches.  The series is feature complete and should be able
to replace the existing workqueue implementation without any problem
(well, theoretically).  I'll send it as a whole series together with
these scheduler patches, cc all the interested people, and explain
Ingo's concern about the notification framework as well as your
general objection to the whole thing.  Let's continue there, okay?

Thanks.

-- 
tejun