Message-ID: <20100602145653.GA2385@linux.vnet.ibm.com>
Date: Wed, 2 Jun 2010 07:56:53 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Daniel J Blueman <daniel.blueman@...il.com>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage
On Tue, Jun 01, 2010 at 02:06:13PM +0100, Daniel J Blueman wrote:
> Hi Paul,
>
> With 2.6.35-rc1 and your patch in the context below, we still see
> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> protection!", so we need this additional patch:
>
> Acquire the RCU read-side lock around task_group() calls, addressing the
> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> protection!" warning.
>
> Signed-off-by: Daniel J Blueman <daniel.blueman@...il.com>
Thank you, Daniel! I have queued this for 2.6.35.
I had to apply the patch by hand due to line wrapping. Could you please
check your email-agent settings? This simple patch was no problem to
hand apply, but for a larger patch this process would be both tedious
and error prone.
Thanx, Paul
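
For context on why the warning fires: task_group() ultimately reaches an
rcu_dereference_check() in include/linux/cgroup.h whose lockdep expression is
satisfied only when the caller holds rcu_read_lock() or one of the other locks
listed there. A rough sketch of the shape of that check (not the exact
cgroup.h source; the field and helper names are from memory):

	struct cgroup_subsys_state *css;

	/*
	 * Sketch only: lockdep complains at the dereference when none of
	 * the conditions in the second argument are true.
	 */
	css = rcu_dereference_check(task->cgroups->subsys[subsys_id],
				    rcu_read_lock_held() ||
				    lockdep_is_held(&task->alloc_lock) ||
				    cgroup_lock_is_held());

Taking rcu_read_lock() around the task_group() calls, as in the patch below,
makes the first condition true and quiets the report.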
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 217e4a9..50ec9ea 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	 * effect of the currently running task from the load
>  	 * of the current CPU:
>  	 */
> +	rcu_read_lock();
>  	if (sync) {
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
> @@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	}
>
>  	tg = task_group(p);
> +	rcu_read_unlock();
>  	weight = p->se.load.weight;
>
>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
>
> ---
>
> On Wed, Apr 21, 2010 at 02:35:43PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
> Excellent. Here are the results on my machine. .config appended.
> First, thank you very much for testing this, Miles!
>
> And as Tetsuo Handa pointed out privately, my patch was way broken.
>
> Here is an updated version.
>
> Thanx, Paul
>
> commit b15e561ed91b7a366c3cc635026f3b9ce6483070
> Author: Paul E. McKenney <paulmck@...xxxxxxxxxxxxxxx>
> Date: Wed Apr 21 14:04:56 2010 -0700
>
> sched: protect __sched_setscheduler() access to cgroups
>
> A given task's cgroups structures must remain while that task is running
> due to reference counting, so this is presumably a false positive.
> Updated to reflect feedback from Tetsuo Handa.
>
> Signed-off-by: Paul E. McKenney <paulmck@...xxxxxxxxxxxxxxx>
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 14c44ec..f425a2b 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4575,9 +4575,13 @@ recheck:
>  	 * Do not allow realtime tasks into groups that have no runtime
>  	 * assigned.
>  	 */
> +	rcu_read_lock();
>  	if (rt_bandwidth_enabled() && rt_policy(policy) &&
> -			task_group(p)->rt_bandwidth.rt_runtime == 0)
> +			task_group(p)->rt_bandwidth.rt_runtime == 0) {
> +		rcu_read_unlock();
>  		return -EPERM;
> +	}
> +	rcu_read_unlock();
>  #endif
>
>  	retval = security_task_setscheduler(p, policy, param);
> --
> Daniel J Blueman
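
For reference, the pattern both patches in this thread apply is the ordinary
RCU read-side critical section around the task_group() lookup; a minimal
sketch in kernel style, not taken from either diff:

	struct task_group *tg;

	rcu_read_lock();		/* enter RCU read-side critical section */
	tg = task_group(p);		/* lookup protected by rcu_dereference_check() */
	/* ... use tg ... */
	rcu_read_unlock();		/* leave the critical section */

Per the commit log above, a running task's cgroup structures are pinned by
reference counting anyway, so the lock here mainly documents the access for
lockdep rather than adding protection that was otherwise missing.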