Message-ID: <alpine.LFD.2.02.1206132050270.3086@ionos>
Date: Wed, 13 Jun 2012 20:56:18 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Tejun Heo <tj@...nel.org>
Subject: Re: [RFC patch 2/5] smpboot: Provide infrastructure for percpu
hotplug threads
On Wed, 13 Jun 2012, Paul E. McKenney wrote:
> On Wed, Jun 13, 2012 at 11:00:54AM -0000, Thomas Gleixner wrote:
>
> So I am currently trying to apply this to RCU's per-CPU kthread.
> I don't believe that I need to mess with RCU's per-rcu_node kthread
> because it can just have its affinity adjusted when the first CPU
> comes online and the last CPU goes offline for the corresponding rcu_node.
>
> One question below about the order of parking.
>
> Also, I have not yet figured out how this avoids a parked thread waking
> up while the CPU is offline, but I am probably still missing something.
If it's just a spurious wakeup, it goes back to sleep right away,
as nothing cleared the park bit.
If something calls unpark(), then it's toast. I should put a warning
into the code somewhere to catch that case.
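For reference, here is a minimal sketch of the thread main loop (not the
exact patch code; the per-thread data struct and the thread_should_run()/
thread_fn() callback names follow struct smp_hotplug_thread) showing why
a stray wakeup is harmless: the park bit is re-checked before any work is
done.

/* Illustrative only, simplified from the infrastructure in this series */
struct smpboot_thread_data {
	unsigned int			cpu;	/* CPU this thread is bound to */
	struct smp_hotplug_thread	*ht;
};

static int smpboot_thread_fn(void *data)
{
	struct smpboot_thread_data *td = data;
	struct smp_hotplug_thread *ht = td->ht;

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);

		if (kthread_should_park()) {
			__set_current_state(TASK_RUNNING);
			/* Blocks here until somebody calls kthread_unpark() */
			kthread_parkme();
			continue;
		}

		if (!ht->thread_should_run(td->cpu)) {
			/*
			 * Spurious wakeup: park bit not set, no work to
			 * do, so just go back to sleep.
			 */
			schedule();
			continue;
		}

		__set_current_state(TASK_RUNNING);
		ht->thread_fn(td->cpu);
	}
	return 0;
}

The warning for the unpark() case would then go into the unpark path,
e.g. something like WARN_ON(cpu_is_offline(cpu)) before the thread is
woken.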
> > +void smpboot_park_threads(unsigned int cpu)
> > +{
> > + struct smp_hotplug_thread *cur;
> > +
> > + mutex_lock(&smpboot_threads_lock);
> > + list_for_each_entry(cur, &hotplug_threads, list)
>
> Shouldn't this be list_for_each_entry_reverse()? Yes, the notifiers
> still run in the same order for both online and offline, but all uses
> of smpboot_park_threads() would be new, so they should be OK with the
> proper ordering, right?
Duh, yes.
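IOW, something like this (untested sketch; smpboot_park_thread() stands
in for whatever the per-thread park helper ends up being called):

void smpboot_park_threads(unsigned int cpu)
{
	struct smp_hotplug_thread *cur;

	mutex_lock(&smpboot_threads_lock);
	/* Park in reverse order of registration */
	list_for_each_entry_reverse(cur, &hotplug_threads, list)
		smpboot_park_thread(cur, cpu);
	mutex_unlock(&smpboot_threads_lock);
}

That way the threads are unparked in registration order when a CPU comes
online and parked in the opposite order when it goes down.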
Thanks,
tglx