Date:	Fri, 20 Jul 2012 10:50:41 -0700
From:	Tejun Heo <tj@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	tglx@...utronix.de, linux-pm@...r.kernel.org
Subject: Re: [PATCHSET] workqueue: reimplement CPU hotplug to keep idle
 workers

Hey, again.

On Fri, Jul 20, 2012 at 07:21:17PM +0200, Peter Zijlstra wrote:
> > So, the above was my rationale before this "we need to stop destroying
> > and re-creating kthreads across CPU hotplug events because phones do
> > it gazillion times".  Now, I don't think we have any other way.
> 
> OK, so why can't you splice the list of works from the CPU going down
> onto the list of the CPU doing the down and convert any busy worker
> threads to be bound to the cpu doing down?
> 
> That way there's nothing 'left' to get back to on up.
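
(Purely for illustration, a minimal sketch of the splice idea Peter is
describing; the structures and field names below are hypothetical, not
the real cpu_workqueue_struct, and this is exactly the transfer the
reply argues the current backlinks make non-trivial.)

	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include <linux/lockdep.h>

	/* Hypothetical per-cpu worklist, used only to show the idea of
	 * splicing pending work from the dying CPU onto the CPU that is
	 * executing the hot-unplug. */
	struct fake_cwq {
		spinlock_t		lock;
		struct list_head	worklist;	/* pending work items */
	};

	static void splice_pending_work(struct fake_cwq *dying,
					struct fake_cwq *online)
	{
		spin_lock(&online->lock);
		spin_lock_nested(&dying->lock, SINGLE_DEPTH_NESTING);

		/* Move everything still queued on the dying CPU onto the
		 * CPU doing the down. */
		list_splice_init(&dying->worklist, &online->worklist);

		spin_unlock(&dying->lock);
		spin_unlock(&online->lock);
	}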

As I wrote above, per-cpu workqueues don't really interact with each
other, and there's no mechanism to transfer work items from one to
another.  Adding one unfortunately isn't trivial because of the
backlinks from each work item to its cpu workqueue, which are
necessary for flush / cancel operations.  I'm sure it's doable, but
that part is already pretty complex (it already was before cmwq, and
untangling it would probably require bloating work_struct), and I
don't think it's wise to complicate the usual hot paths for hotplug
support.
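
(A rough sketch of the backlink problem, reusing the hypothetical
fake_cwq from the sketch above; the names and layout are illustrative
only.  The point is that cancel/flush resolve a work item back to the
per-cpu queue it was issued on, so items can't simply be spliced onto
another CPU's list without rewriting that link for each one.)

	/* Each work item remembers which per-cpu workqueue it was queued
	 * on; cancel (and flush) rely on that backlink. */
	struct fake_work {
		struct list_head	entry;
		struct fake_cwq		*cwq;	/* set at queue time */
		void			(*func)(struct fake_work *);
	};

	static bool fake_cancel_work(struct fake_work *work)
	{
		struct fake_cwq *cwq = work->cwq;
		bool was_pending = false;

		if (!cwq)
			return false;

		spin_lock(&cwq->lock);
		/*
		 * This is what breaks if the item was silently spliced onto
		 * another CPU's queue: the backlink now names the wrong cwq
		 * and the wrong lock.
		 */
		if (!list_empty(&work->entry)) {
			list_del_init(&work->entry);
			was_pending = true;
		}
		spin_unlock(&cwq->lock);

		return was_pending;
	}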

Also, re-binding busy workers is easy.  The idle ones are the
difficult part, and we have to handle those anyway for the PM
optimization.  What would be the benefit of not re-binding busy ones,
at the risk of continually transferring workers to another CPU given
the right workload and CPU down/up patterns?
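
(To illustrate why the busy case is the easy one: a hedged sketch,
with a hypothetical helper name, of rebinding a single busy worker
once its CPU comes back online.  It's essentially just an affinity
fix-up; the worker keeps executing its current work item and simply
migrates.  The hard part noted above is the idle workers racing
against concurrent hotplug, which this ignores.)

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	/* Hypothetical: @task is assumed to be the kthread backing a
	 * busy worker that should be bound back to @cpu. */
	static void rebind_busy_worker(struct task_struct *task, int cpu)
	{
		/* Nothing more than restoring the allowed cpumask. */
		WARN_ON(set_cpus_allowed_ptr(task, cpumask_of(cpu)));
	}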

Thanks.

-- 
tejun