Message-ID: <1311887581.2617.374.camel@laptop>
Date: Thu, 28 Jul 2011 23:13:01 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jesper Juhl <jj@...osbits.net>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH] sched: take rt_rq->rt_runtime_lock around rt_rq->rt_runtime modification
On Thu, 2011-07-28 at 22:13 +0200, Jesper Juhl wrote:
> Everywhere (that I could find) that we modify rt_rq->rt_runtime we hold
> rt_rq->rt_runtime_lock, except in alloc_rt_sched_group(). Shouldn't we do
> so there as well, as per this patch?
>
> Signed-off-by: Jesper Juhl <jj@...osbits.net>
> ---
> kernel/sched.c | 2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index ccacdbd..d5a3737 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -8488,7 +8488,9 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
> goto err_free_rq;
>
> init_rt_rq(rt_rq, cpu_rq(i));
> + raw_spin_lock(&rt_rq->rt_runtime_lock);
> rt_rq->rt_runtime = tg->rt_bandwidth.rt_runtime;
> + raw_spin_unlock(&rt_rq->rt_runtime_lock);
> init_tg_rt_entry(tg, rt_rq, rt_se, i, parent->rt_se[i]);
This is init code: the rt_rq is freshly allocated and not yet exposed to
any other CPU or task, so there is no concurrency to protect against and
taking the lock here buys us nothing.
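
For illustration, a minimal userspace sketch of that point (this is not
the kernel code; the rt_rq_like struct, the pthread mutexes standing in
for raw_spin_lock, and the publish pointer are all made up for the
example): an object that is still private to the thread constructing it
can be written without its lock, because nobody else can reach it until
it is published.

#include <pthread.h>
#include <stdlib.h>

struct rt_rq_like {
	pthread_mutex_t runtime_lock;	/* stand-in for rt_runtime_lock */
	unsigned long long rt_runtime;
};

static struct rt_rq_like *shared;	/* "published" pointer */
static pthread_mutex_t publish_lock = PTHREAD_MUTEX_INITIALIZER;

static struct rt_rq_like *alloc_and_init(unsigned long long runtime)
{
	struct rt_rq_like *rq = calloc(1, sizeof(*rq));

	if (!rq)
		return NULL;

	pthread_mutex_init(&rq->runtime_lock, NULL);

	/* Still private: no other thread has a reference, no lock needed. */
	rq->rt_runtime = runtime;

	/* Publication point: from here on, writers must take runtime_lock. */
	pthread_mutex_lock(&publish_lock);
	shared = rq;
	pthread_mutex_unlock(&publish_lock);

	return rq;
}

int main(void)
{
	struct rt_rq_like *rq = alloc_and_init(950000ULL);

	/* After publication the lock is required for modifications. */
	if (rq) {
		pthread_mutex_lock(&rq->runtime_lock);
		rq->rt_runtime = 900000ULL;
		pthread_mutex_unlock(&rq->runtime_lock);
	}
	return 0;
}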