Message-ID: <20110607101251.777.34547.stgit@IBM-009124035060.in.ibm.com>
Date: Tue, 07 Jun 2011 15:43:22 +0530
From: "Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>
To: peterz@...radead.org, mingo@...e.hu
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] sched: remove rcu_read_lock from wake_affine
wake_affine() is called from only one path, select_task_rq_fair(), which
already holds the rcu read lock, so the inner lock/unlock pair is redundant.
Signed-off-by: Nikunj A. Dadhania <nikunj@...ux.vnet.ibm.com>
---
kernel/sched_fair.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 354e26b..0bfec93 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
 
 #endif
 
+/* Assumes rcu_read_lock is held */
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 {
 	s64 this_load, load;
@@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
-	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
-	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within
--