Message-Id: <20181218053426.GP4170@linux.ibm.com>
Date: Mon, 17 Dec 2018 21:34:26 -0800
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: "Zhang, Jun" <jun.zhang@...el.com>
Cc: "He, Bo" <bo.he@...el.com>, Steven Rostedt <rostedt@...dmis.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"josh@...htriplett.org" <josh@...htriplett.org>,
"mathieu.desnoyers@...icios.com" <mathieu.desnoyers@...icios.com>,
"jiangshanlai@...il.com" <jiangshanlai@...il.com>,
"Xiao, Jin" <jin.xiao@...el.com>,
"Zhang, Yanmin" <yanmin.zhang@...el.com>,
"Bai, Jie A" <jie.a.bai@...el.com>,
"Sun, Yi J" <yi.j.sun@...el.com>,
"Chang, Junxiao" <junxiao.chang@...el.com>,
"Mei, Paul" <paul.mei@...el.com>
Subject: Re: rcu_preempt caused oom
On Tue, Dec 18, 2018 at 02:46:43AM +0000, Zhang, Jun wrote:
> Hello, paul
>
> In softirq context, when current is rcu_preempt-10, rcu_gp_kthread_wake() doesn't wake up rcu_preempt.
> Maybe the patch below could fix it. Please help review.
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 0b760c1..98f5b40 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1697,7 +1697,7 @@ static bool rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
> */
> static void rcu_gp_kthread_wake(struct rcu_state *rsp)
> {
> - if (current == rsp->gp_kthread ||
> + if (((current == rsp->gp_kthread) && !in_softirq()) ||
Close, but not quite. Please see below.
> !READ_ONCE(rsp->gp_flags) ||
> !rsp->gp_kthread)
> return;
>
> [44932.311439, 0][ rcu_preempt] rcu_preempt-10 [001] .n.. 44929.401037: rcu_grace_period: rcu_preempt 19063548 reqwait
> ......
> [44932.311517, 0][ rcu_preempt] rcu_preempt-10 [001] d.s2 44929.402234: rcu_future_grace_period: rcu_preempt 19063548 19063552 0 0 3 Startleaf
> [44932.311536, 0][ rcu_preempt] rcu_preempt-10 [001] d.s2 44929.402237: rcu_future_grace_period: rcu_preempt 19063548 19063552 0 0 3 Startedroot
Good catch! If the rcu_preempt kthread had just entered the function
swait_event_idle_exclusive(), which had just called __swait_event_idle()
which had just called ___swait_event(), which had just gotten done
checking the "condition", then yes, the rcu_preempt kthread could
sleep forever. This is a very narrow race window, but that matches
your experience with its not happening often -- and my experience with
it not happening at all.
However, for this to happen, the wakeup must happen within a softirq
handler that executes upon return from an interrupt that interrupted
___swait_event() just after the "if (condition)". For this, we don't want
in_softirq() but rather in_serving_softirq(), as shown in the patch below.
The patch you have above could result in spurious wakeups, as it is
checking for bottom halves being disabled, not just executing within a
softirq handler. Which might be better than not having enough wakeups,
but let's please try for just the right number. ;-)
So could you please instead test the patch below?
And if it works, could I please have your Signed-off-by so that I can
queue it? My patch is quite clearly derived from yours, after all!
And you should get credit for finding the problem and arriving at an
approximate fix, after all.
Thanx, Paul
------------------------------------------------------------------------
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e9392a9d6291..b9205b40b621 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1722,7 +1722,7 @@ static bool rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
*/
static void rcu_gp_kthread_wake(struct rcu_state *rsp)
{
- if (current == rsp->gp_kthread ||
+ if ((current == rsp->gp_kthread && !in_serving_softirq()) ||
!READ_ONCE(rsp->gp_flags) ||
!rsp->gp_kthread)
return;