Date:	Mon, 26 Mar 2012 11:11:27 +1030
From:	Rusty Russell <rusty@...tcorp.com.au>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
	Arjan van de Ven <arjan@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	"akpm\@linux-foundation.org" <akpm@...ux-foundation.org>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Milton Miller <miltonm@....com>,
	"mingo\@elte.hu" <mingo@...e.hu>, Tejun Heo <tj@...nel.org>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Linux PM mailing list <linux-pm@...r.kernel.org>
Subject: Re: CPU Hotplug rework

On Fri, 23 Mar 2012 17:23:47 -0700, "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> On Sat, Mar 24, 2012 at 09:57:32AM +1030, Rusty Russell wrote:
> > On Thu, 22 Mar 2012 15:49:20 -0700, "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> > > On Thu, Mar 22, 2012 at 02:55:04PM +1030, Rusty Russell wrote:
> > > > On Wed, 21 Mar 2012 10:01:59 +0100, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> > > > > Thing is, if it's really too much for some people, they can orchestrate
> > > > > it such that it's not.  Just move everybody into a cpuset, clear the
> > > > > to-be-offlined CPU from the cpuset's mask -- this will migrate everybody
> > > > > away.  Then hotplug will find an empty runqueue and it's fast, no?
> > > > 
> > > > I like this solution better.
> > > 
> > > As long as we have some way to handle kthreads that are algorithmically
> > > tied to a given CPU.  There are coding conventions to handle this, for
> > > example, do everything with preemption disabled and just after each
> > > preempt_disable() verify that you are in fact running on the correct
> > > CPU, but it is easy to imagine improvements.
> > 
> > I don't think we should move per-cpu kthreads at all.  Let's stop trying
> > to save a few bytes of memory, and just leave them frozen.  They'll run
> > again if/when the CPU returns.
> 
> OK, that would work for me.  So, how do I go about freezing RCU's
> per-CPU kthreads?

Good question.

Obviously, having callbacks hang around until the CPU comes back is not
viable, nor is keeping preemption disabled across the callbacks.  Calling
get_online_cpus() is too heavyweight.
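
(For reference, the check-after-preempt_disable() convention Paul mentions
above is roughly the sketch below.  invoke_local_callbacks() is a made-up
name, and the whole body running with preemption off is exactly the part
that doesn't work for long-running callbacks.)

#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/bug.h>

/* Sketch only: run with preemption disabled and re-check the CPU right
 * after each preempt_disable().  invoke_local_callbacks() is hypothetical. */
static void process_callbacks_on(int my_cpu)
{
        preempt_disable();
        if (WARN_ON_ONCE(smp_processor_id() != my_cpu)) {
                /* Migrated: our CPU is going (or has gone) away. */
                preempt_enable();
                return;
        }
        /* Everything here runs with preemption off, which is what rules
         * this out for callbacks that can run for a long time. */
        invoke_local_callbacks(my_cpu);
        preempt_enable();
}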

I can think of three approaches:

1) Put the being-processed RCU callbacks into a per-cpu var, and pull them
   off that list with preemption disabled (rough sketch after this list).
   This lets us clean up after the thread gets frozen as its CPU goes
   offline, but doesn't solve the case of going offline during a callback.

2) Sync with the thread somehow during a notifier callback.  This is the
   same kind of logic as shutting the thread down, so it's not really
   attractive from a simplicity POV.

3) Create a per-cpu rwsem to stop a specific CPU from going down, and
   just grab that while we're processing RCU callbacks (sketch at the
   end of this mail).
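
A rough sketch of #1, using a per-cpu llist ('struct my_cb', its llnode
member, and invoke_cb() are made up for illustration, and the notifier
side that adopts leftovers isn't shown):

#include <linux/llist.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

/* Callbacks being processed on each CPU, kept visible so a CPU_DEAD
 * notifier could adopt whatever is left if the kthread is frozen. */
static DEFINE_PER_CPU(struct llist_head, being_processed);

static void kthread_process_callbacks(void)
{
        struct llist_node *node;

        for (;;) {
                preempt_disable();
                node = llist_del_first(this_cpu_ptr(&being_processed));
                preempt_enable();
                if (!node)
                        break;
                /* Going offline right here, mid-callback, is the case
                 * this approach still doesn't solve. */
                invoke_cb(llist_entry(node, struct my_cb, llnode));
        }
}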

If this pattern of kthread is common, then #3 (or some equivalent
lightweight way of stopping a specific CPU from going offline) is
looking attractive.
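
A hand-waving sketch of what #3 might look like; the per-cpu rwsem and
both helpers are hypothetical, nothing here is an existing API:

#include <linux/percpu.h>
#include <linux/rwsem.h>

/* One rwsem per CPU, each init_rwsem()'d at boot (not shown). */
static DEFINE_PER_CPU(struct rw_semaphore, cpu_offline_rwsem);

/* The per-cpu kthread takes the read side around callback processing,
 * pinning only its own CPU rather than all of them the way
 * get_online_cpus() does. */
static void process_callbacks_pinned(int cpu)
{
        struct rw_semaphore *sem = &per_cpu(cpu_offline_rwsem, cpu);

        down_read(sem);
        invoke_callbacks_for(cpu);      /* hypothetical helper */
        up_read(sem);
}

/* The hotplug path would down_write(&per_cpu(cpu_offline_rwsem, cpu)) in
 * CPU_DOWN_PREPARE and release it after CPU_DEAD or CPU_DOWN_FAILED, so
 * only that one CPU's removal waits for in-flight callbacks. */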

Cheers,
Rusty.
-- 
  How could I marry someone with more hair than me?  http://baldalex.org
