Message-ID: <20081028173508.GI6779@linux.vnet.ibm.com>
Date: Tue, 28 Oct 2008 10:35:08 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Manfred Spraul <manfred@...orfullife.com>
Cc: linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
mingo@...e.hu, akpm@...ux-foundation.org, dipankar@...ibm.com,
josht@...ux.vnet.ibm.com, schamp@....com, niv@...ibm.com,
dvhltc@...ibm.com, ego@...ibm.com, laijs@...fujitsu.com,
rostedt@...dmis.org, peterz@...radead.org, penberg@...helsinki.fi,
andi@...stfloor.org, tglx@...utronix.de
Subject: Re: [PATCH, RFC] v7 scalable classic RCU implementation

On Tue, Oct 28, 2008 at 06:21:06PM +0100, Manfred Spraul wrote:
> Paul E. McKenney wrote:
>>> How do you intend to handle nohz cpus?
>>
>> In which variant of RCU? My current thought is to apply the rcutree.c
>> version to rcupreempt.c. If rcuclassic.c can be dropped, my thought
>> would be to leave it alone -- it is unnecessarily awakening CPUs, but
>> this is a non-fatal issue.
>>
> For rcuclassic.

If we were to keep rcuclassic for any length of time, I would modify
rcu_pending() and rcu_check_callbacks() to invoke force_quiescent_state()
if there was a longish (say 3-5 jiffies) delay in the RCU grace period.
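
Something like the following sketch, though I should emphasize that
rcu_gp_stalled() and the rcp->gp_start field (a jiffies timestamp that
would be taken when the grace period starts) are hypothetical --
rcuclassic does not currently record when the grace period began:

	/*
	 * Sketch only: rcp->gp_start is a hypothetical field holding
	 * the jiffies timestamp at which the current grace period
	 * started.  Using 3 jiffies from the 3-5 range above.
	 */
	static int rcu_gp_stalled(struct rcu_ctrlblk *rcp)
	{
		return time_after(jiffies, rcp->gp_start + 3);
	}

	/* ...then from rcu_check_callbacks()/rcu_pending(): */
	if (rcu_gp_stalled(rcp))
		force_quiescent_state(rdp, rcp);

Note that force_quiescent_state() already exists in rcuclassic.c, and
just sends resched IPIs to the CPUs still listed in rcp->cpumask, so
the nohz wakeup-avoidance benefit would largely survive.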
> As far as I can see, rcuclassic treats nohz cpus as always outside
> rcu_read_lock():
> rcu_start_batch() contains
> >
> > cpus_andnot(rcp->cpumask, cpu_online_map, nohz_cpu_mask);
> >
> As soon as all cpus in rcp->cpumask have reported a quiescent state, the
> callbacks are called.
> That's a bug, therefore I would drop rcuclassic as soon as rcutree is merged.

Good point; I had forgotten that issue. Making this modification would
cause the resulting rcuclassic to be just as suspect as is rcutree,
I suppose.
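
For reference, here is the suspect spot -- only the cpus_andnot() line
below is verbatim from rcuclassic.c, the rest is a sketch:

	static void rcu_start_batch(struct rcu_ctrlblk *rcp)
	{
		/* ... */

		/*
		 * Snapshot the CPUs that this grace period must wait
		 * for, excluding CPUs in nohz (dynticks-idle) mode.
		 * Any CPU in nohz_cpu_mask is thereby assumed to stay
		 * outside of rcu_read_lock() sections for the full
		 * grace period, and nothing orders this read of
		 * nohz_cpu_mask against that CPU re-entering the
		 * kernel.
		 */
		cpus_andnot(rcp->cpumask, cpu_online_map, nohz_cpu_mask);

		/* ... */
	}
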
A strong argument for moving to rcutree.c quickly rather than slowly,
I must admit!

Thanx, Paul