Message-Id: <1483371588-17140-1-git-send-email-urezki@gmail.com>
Date: Mon, 2 Jan 2017 16:39:46 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>
Subject: [RFC 1/3] sched: set loop_max after rq lock is taken
From: Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>
While doing a load balance there is a race in setting the
loop_max variable: it is derived from busiest->nr_running
before the rq lock is taken, so nr_running can change in the
meantime and lead to an incorrect number of iterations.
As a result we may skip some candidates or check the same
tasks again.
Signed-off-by: Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c242944..c5d9351 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7744,12 +7744,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
* correctly treated as an imbalance.
*/
env.flags |= LBF_ALL_PINNED;
- env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);
more_balance:
raw_spin_lock_irqsave(&busiest->lock, flags);
/*
+ * Set loop_max when rq's lock is taken to prevent a race.
+ */
+ env.loop_max = min(sysctl_sched_nr_migrate,
+ busiest->nr_running);
+
+ /*
* cur_ld_moved - load moved in current iteration
* ld_moved - cumulative load moved across iterations
*/
--
2.1.4
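
For illustration only, here is a minimal user-space sketch of the race
described above. It uses pthreads and invented names (fake_rq,
loop_max_racy, loop_max_fixed), not the kernel API: sampling nr_running
before taking the lock can produce a stale iteration bound, while
sampling it with the lock held cannot.

/*
 * Sketch of the pattern the patch addresses; invented names, not
 * kernel code.
 */
#include <pthread.h>
#include <stdio.h>

#define FAKE_NR_MIGRATE 32	/* stand-in for sysctl_sched_nr_migrate */

struct fake_rq {
	pthread_mutex_t lock;
	unsigned int nr_running;	/* may be updated concurrently */
};

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* Racy: nr_running is sampled before the lock is taken. */
static unsigned int loop_max_racy(struct fake_rq *rq)
{
	unsigned int loop_max = min_u(FAKE_NR_MIGRATE, rq->nr_running);

	pthread_mutex_lock(&rq->lock);
	/* nr_running may already differ from the sampled value here. */
	pthread_mutex_unlock(&rq->lock);
	return loop_max;
}

/* Fixed: nr_running is sampled with the lock held, as in the patch. */
static unsigned int loop_max_fixed(struct fake_rq *rq)
{
	unsigned int loop_max;

	pthread_mutex_lock(&rq->lock);
	loop_max = min_u(FAKE_NR_MIGRATE, rq->nr_running);
	pthread_mutex_unlock(&rq->lock);
	return loop_max;
}

int main(void)
{
	struct fake_rq rq = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.nr_running = 5,
	};

	printf("racy:  %u\n", loop_max_racy(&rq));
	printf("fixed: %u\n", loop_max_fixed(&rq));
	return 0;
}

The fixed variant mirrors the patch: the bound is computed only once the
lock is held, so it is consistent with the run-queue state seen while
iterating over tasks.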