Message-ID: <4CEDDB2A.2020807@cn.fujitsu.com>
Date: Thu, 25 Nov 2010 11:42:34 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Frederic Weisbecker <fweisbec@...il.com>
CC: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 0/2] rcu: Fix series of spurious RCU softirqs
On 11/24/2010 08:31 AM, Frederic Weisbecker wrote:
> Hi,
>
> I've observed some not-so-infrequent series of spurious RCU
> softirqs, sometimes happening on every tick for a random
> stretch of time.
>
> These patches aim to fix them.
>
> Thanks.
>
> Frederic Weisbecker (2):
> rcu: Don't chase unnecessary quiescent states after extended grace periods
> rcu: Stop checking quiescent states after grace period completion from remote
>
If we ensure that rdp->gpnum >= rdp->completed is always true, the problems
you described will not exist. Or maybe I misunderstand you.

I think rdp->gpnum >= rdp->completed is a very important guarantee.
(In my RCURING, it is guaranteed.) I'm afraid some other problems are
still hidden if it is not guaranteed.

So I recommend the following (code is better than words):
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index d5bc439..af4e87a 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -648,6 +648,13 @@ __rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_dat
 
 		/* Remember that we saw this grace-period completion. */
 		rdp->completed = rnp->completed;
+
+		/* Ensure ->gpnum >= ->completed after NO_HZ */
+		if (unlikely(rnp->completed - rdp->gpnum > 0
+			     || rdp->gpnum - rnp->gpnum > 0)) {
+			rdp->gpnum = rnp->completed;
+			rdp->qs_pending = 0;
+		}
 	}
 }
 
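
(A standalone sketch of the gpnum >= completed guarantee discussed above,
checked with a wraparound-tolerant comparison, which is a common idiom for
grace-period counters. This is not kernel code: all names in it are made up,
and it uses unsigned counters so the wraparound in the example is well
defined.)

/*
 * Standalone illustration (userspace, not kernel code): the guarantee
 * "gpnum never falls behind completed", checked with a wraparound-
 * tolerant comparison in the spirit of the subtraction tests above.
 */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Wraparound-tolerant "a is at or ahead of b" for sequence counters. */
static int gp_counter_ge(unsigned long a, unsigned long b)
{
	return a - b <= ULONG_MAX / 2;
}

int main(void)
{
	unsigned long completed = ULONG_MAX;	/* about to wrap */
	unsigned long gpnum = completed + 1;	/* wrapped around to 0 */

	/* A direct comparison is fooled by the wrap... */
	printf("direct     gpnum >= completed: %d\n", gpnum >= completed);

	/* ...while the difference test still sees gpnum ahead. */
	printf("difference gpnum >= completed: %d\n",
	       gp_counter_ge(gpnum, completed));

	/* The guarantee argued for above, checked the same way. */
	assert(gp_counter_ge(gpnum, completed));
	return 0;
}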