Message-Id: <20160615225420.GB3923@linux.vnet.ibm.com>
Date: Wed, 15 Jun 2016 15:54:20 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 09/12] rcu: Make call_rcu_tasks() tolerate
first call with irqs disabled
On Thu, Jun 16, 2016 at 12:15:14AM +0200, Peter Zijlstra wrote:
> On Wed, Jun 15, 2016 at 02:46:10PM -0700, Paul E. McKenney wrote:
> > Currently, if the very first call to call_rcu_tasks() has irqs disabled,
> > it will create the rcu_tasks_kthread with irqs disabled, which will
> > result in a splat in the memory allocator, which kthread_run() invokes
> > with the expectation that irqs are enabled.
> >
> > This commit fixes this problem by deferring kthread creation if called
> > with irqs disabled. The first call to call_rcu_tasks() that has irqs
> > enabled will create the kthread.
> >
> > This bug was detected by rcutorture changes that were motivated by
> > Iftekhar Ahmed's mutation-testing efforts.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 218f8e83db73..4a3b279beb42 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2175,7 +2175,7 @@ void task_numa_free(struct task_struct *p)
> >  
> >  		grp->nr_tasks--;
> >  		spin_unlock_irqrestore(&grp->lock, flags);
> > -		RCU_INIT_POINTER(p->numa_group, NULL);
> > +		rcu_assign_pointer(p->numa_group, NULL);
> >  		put_numa_group(grp);
> >  	}
> >  
>
> This seems entirely unrelated; albeit desired given that other patch.
Yikes!
As you probably guessed, this was my test case for rcu_assign_pointer(NULL),
and I clearly failed to clean up after myself. It turns out that more than
30 new rcu_assign_pointer(NULL) instances have been added in the meantime,
several of which look to be in popular core code. So some testing will
happen.
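For those who have not been following along, the reason that
rcu_assign_pointer(NULL) can now be cheap is that a compile-time-constant
NULL gives readers nothing to dereference, so the store needs no release
ordering. Very roughly, and with a made-up name rather than the exact
rcupdate.h macro, the idea is:

	/*
	 * Simplified sketch only: the name is made up and the real
	 * rcu_assign_pointer() differs in detail.  The point is the
	 * compile-time check: a constant NULL can be stored with a
	 * plain WRITE_ONCE(), while anything else gets release
	 * ordering so that readers see a fully initialized structure.
	 */
	#define sketch_rcu_assign_pointer(p, v) \
	do { \
		if (__builtin_constant_p(v) && (v) == NULL) \
			WRITE_ONCE((p), (typeof(p))NULL); \
		else \
			smp_store_release(&(p), (v)); \
	} while (0)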
But if you would like your code to also participate in this testing
effort, here is that patch standalone.
Thanx, Paul
------------------------------------------------------------------------
commit 0f11d148dfdb67b55efdc72a1a959c8e44c5d54c
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date:   Wed Jun 15 15:52:25 2016 -0700

    sched: Switch from RCU_INIT_POINTER() to rcu_assign_pointer()
    
    Given that rcu_assign_pointer() now avoids providing memory ordering
    when the value assigned is the constant NULL, this commit switches
    task_numa_free() from RCU_INIT_POINTER() to rcu_assign_pointer().
    
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 218f8e83db73..4a3b279beb42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2175,7 +2175,7 @@ void task_numa_free(struct task_struct *p)
 
 		grp->nr_tasks--;
 		spin_unlock_irqrestore(&grp->lock, flags);
-		RCU_INIT_POINTER(p->numa_group, NULL);
+		rcu_assign_pointer(p->numa_group, NULL);
 		put_numa_group(grp);
 	}
 
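The reader-side view of the same argument, with a hypothetical reader and
a made-up use_numa_group() rather than anything from fair.c:

	/*
	 * Hypothetical reader, not actual fair.c code.  If
	 * rcu_dereference() returns NULL, the reader dereferences
	 * nothing, which is why the updater may store a constant NULL
	 * without ordering: there is no later access for that store
	 * to order against.
	 */
	static void sketch_reader(struct task_struct *p)
	{
		struct numa_group *grp;

		rcu_read_lock();
		grp = rcu_dereference(p->numa_group);
		if (grp)
			use_numa_group(grp);	/* only this path needs ordering */
		rcu_read_unlock();
	}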