Message-ID: <1342804877.2583.42.camel@twins>
Date:	Fri, 20 Jul 2012 19:21:17 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	tglx@...utronix.de, linux-pm@...r.kernel.org
Subject: Re: [PATCHSET] workqueue: reimplement CPU hotplug to keep idle
 workers

On Fri, 2012-07-20 at 10:02 -0700, Tejun Heo wrote:
> Hey, Peter.
> 
> On Fri, Jul 20, 2012 at 05:48:31PM +0200, Peter Zijlstra wrote:
> > On Tue, 2012-07-17 at 10:12 -0700, Tejun Heo wrote:
> > > While this makes rebinding somewhat more complicated, as it has to be
> > > able to rebind idle workers too, it allows the overall hotplug path
> > > to be much simpler.
> > 
> > I really don't see the point of re-binding... at that point you've
> > well and truly violated any per-cpu expectation, so why not let the
> > works finish running on the disassociated gcwq and let new works
> > accrue on the per-cpu ones again?
> 
> We've discussed this a couple of times now; the existing reasons were:
> 
> * Local affinity has, since the beginning, more often been used as a
>   form of affinity optimization.  This, mixed with queue_work() /
>   queue_work_on(), does make things muddy.
> 
> * With local affinity used for optimization, we'd better support
>   detaching running workers - before cmwq, this used to be one of the
>   sources of trouble during power state changes.
> 
> * So, we have unbound workers which started as bound while a CPU is
>   down.  When the CPU comes back up again, we can do one of the
>   following - 1. migrate the unbound ones to WORK_CPU_UNBOUND (can
>   also do this on CPU_DOWN), 2. leave them unbound and keep them
>   running in parallel with bound ones, or 3. rebind them.  #2 is the
>   hairiest - it contaminates the usual !hotplug code paths.  #1 or #3,
>   unsure, but given how global_cwq's don't usually interact with each
>   other, I thought #3 would be lower impact on hot paths.
> 
> So, the above was my rationale before this "we need to stop destroying
> and re-creating kthreads across CPU hotplug events because phones do
> it a gazillion times".  Now, I don't think we have any other way.
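
To make the trade-off concrete, here is a minimal sketch of option #3,
rebinding workers when their CPU comes back online.  The structs below
are cut-down stand-ins for the workqueue.c internals, rebind_workers()
is an illustrative name rather than code from the series, and locking
and synchronization against worker creation are elided:

#include <linux/cpumask.h>
#include <linux/list.h>
#include <linux/sched.h>

/* minimal stand-ins for the gcwq internals under discussion */
struct worker {
	struct task_struct	*task;
	struct list_head	entry;
};

struct global_cwq {
	unsigned int		cpu;
	struct list_head	worklist;
	struct list_head	idle_list;
};

/* CPU_ONLINE path: move the workers' affinity back to gcwq->cpu */
static void rebind_workers(struct global_cwq *gcwq)
{
	struct worker *worker;

	/* idle workers kept alive across the unplug must be rebound
	 * too, which is the extra complication mentioned above */
	list_for_each_entry(worker, &gcwq->idle_list, entry)
		set_cpus_allowed_ptr(worker->task, cpumask_of(gcwq->cpu));
}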

OK, so why can't you splice the list of works from the CPU going down
onto the list of the CPU driving the down operation, and bind any busy
worker threads to that same CPU (sketched below)?

That way there's nothing 'left' to get back to on the next up.
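
Something like the following, reusing the stand-in structs from the
sketch above plus a hypothetical busy_list (the real gcwq hashes busy
workers rather than keeping them on a plain list), with the locking
between the two gcwqs elided as well:

/* CPU_DOWN path: hand everything over to the CPU driving the unplug */
static void takeover_works(struct global_cwq *dying,
			   struct global_cwq *host)
{
	struct worker *worker;

	/* splice the whole pending work list across in O(1) */
	list_splice_init(&dying->worklist, &host->worklist);

	/* busy workers finish their current item bound to the host
	 * CPU, so nothing is left to rebind on a later CPU_ONLINE */
	list_for_each_entry(worker, &dying->busy_list, entry)
		set_cpus_allowed_ptr(worker->task, cpumask_of(host->cpu));
}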
