Message-ID: <1307442411.2322.246.camel@twins>
Date: Tue, 07 Jun 2011 12:26:51 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>
Cc: mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: remove rcu_read_lock from wake_affine
On Tue, 2011-06-07 at 15:43 +0530, Nikunj A. Dadhania wrote:
> wake_affine() is called from a single path, select_task_rq_fair(), which
> already holds the RCU read lock.
>
> Signed-off-by: Nikunj A. Dadhania <nikunj@...ux.vnet.ibm.com>
> ---
> kernel/sched_fair.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 354e26b..0bfec93 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
>
> #endif
>
> +/* Assumes rcu_read_lock is held */
Not a big fan of such comments; especially with CONFIG_PROVE_RCU it's
better to use those facilities, which is to say: if we're missing a
rcu_read_lock() the thing will yell bloody murder.
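
E.g. (a sketch, not something in this patch; rcu_read_lock_held() is the
real rcupdate.h helper, its placement here is only illustrative), the
requirement can be made machine-checked instead of commented:

  static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
  {
  	/*
  	 * Under CONFIG_PROVE_RCU this yells through lockdep if the
  	 * caller forgot rcu_read_lock(); task_group() already does the
  	 * same implicitly via rcu_dereference_check().
  	 */
  	WARN_ON_ONCE(!rcu_read_lock_held());
  	...
  }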
> static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> {
> s64 this_load, load;
> @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> * effect of the currently running task from the load
> * of the current CPU:
> */
> - rcu_read_lock();
> if (sync) {
> tg = task_group(current);
> weight = current->se.load.weight;
> @@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> balanced = this_eff_load <= prev_eff_load;
> } else
> balanced = true;
> - rcu_read_unlock();
>
> /*
> * If the currently running task will sleep within
>
OK, took the patch and removed the comment, thanks!
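
For reference, the caller's read-side section that makes the removed
lock/unlock pair redundant looks roughly like this (a trimmed sketch of
select_task_rq_fair(), not verbatim):

  static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
  {
  	...
  	rcu_read_lock();
  	for_each_domain(cpu, tmp) {
  		/* walk the domain tree under RCU, pick affine_sd */
  		...
  	}

  	if (affine_sd && wake_affine(affine_sd, p, sync))
  		/* wake_affine() runs inside this read-side section */
  		...

  	rcu_read_unlock();
  	...
  }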