Message-Id: <20210602122555.10082-1-laoar.shao@gmail.com>
Date:   Wed,  2 Jun 2021 20:25:55 +0800
From:   Yafang Shao <laoar.shao@...il.com>
To:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com
Cc:     linux-kernel@...r.kernel.org, Yafang Shao <laoar.shao@...il.com>
Subject: [PATCH 1/1] sched: do active load balance on the new idle cpu

We observed that our latency-sensitive RT tasks are randomly preempted
by the migration/n kthreads, whose job here is to migrate a task from
CPU n to the new idle CPU. The logic is as follows (a code sketch of
the pre-patch flow is given after the diagram):

  new idle CPU				CPU n
  (no task to run)              	(busy running)
  wakeup migration/n			(busy running)
  (idle)                        	migration/n preempts current task
  run the migrated task			(busy running)
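
Before this change, load_balance() on the new idle CPU only queues the
stop-work; the stopper thread on the busy CPU then does the actual
migration. Roughly, as a condensed sketch of the pre-patch hunk in
kernel/sched/fair.c (surrounding error handling omitted):

  /* on the new idle CPU, inside load_balance() */
  busiest->push_cpu = this_cpu;
  raw_spin_unlock_irqrestore(&busiest->lock, flags);
  if (active_balance)
          /* wakes migration/n, preempting whatever runs on CPU n */
          stop_one_cpu_nowait(cpu_of(busiest),
                  active_load_balance_cpu_stop, busiest,
                  &busiest->active_balance_work);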

As the new idle CPU is about to go idle anyway, we'd better do the
migration work on it instead of burdening the busy CPU. After this
change, the logic is (again sketched after the diagram):
 new idle CPU				CPU n
 (no task to run) 			(busy running)
 migrate task from CPU n		(busy running)
 run the migrated task			(busy running)
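
With this patch, the new idle CPU calls the balance callback directly
while still holding busiest->lock, so no stopper thread is woken on
CPU n; the detached task is then attached to this CPU's runqueue via
attach_one_task(). A condensed sketch of the resulting flow (see the
diff below):

  /* on the new idle CPU, inside load_balance() */
  if (active_balance)
          /* pull the task ourselves; CPU n keeps running undisturbed */
          active_load_balance_cpu_stop(busiest);
  raw_spin_unlock_irqrestore(&busiest->lock, flags);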

Signed-off-by: Yafang Shao <laoar.shao@...il.com>
---
 kernel/sched/fair.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3248e24a90b0..3e8b98b982ff 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9807,13 +9807,11 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 				busiest->push_cpu = this_cpu;
 				active_balance = 1;
 			}
-			raw_spin_unlock_irqrestore(&busiest->lock, flags);
 
-			if (active_balance) {
-				stop_one_cpu_nowait(cpu_of(busiest),
-					active_load_balance_cpu_stop, busiest,
-					&busiest->active_balance_work);
-			}
+			if (active_balance)
+				active_load_balance_cpu_stop(busiest);
+
+			raw_spin_unlock_irqrestore(&busiest->lock, flags);
 		}
 	} else {
 		sd->nr_balance_failed = 0;
@@ -9923,7 +9921,6 @@ static int active_load_balance_cpu_stop(void *data)
 	struct task_struct *p = NULL;
 	struct rq_flags rf;
 
-	rq_lock_irq(busiest_rq, &rf);
 	/*
 	 * Between queueing the stop-work and running it is a hole in which
 	 * CPUs can become inactive. We should not move tasks from or to
@@ -9933,8 +9930,7 @@ static int active_load_balance_cpu_stop(void *data)
 		goto out_unlock;
 
 	/* Make sure the requested CPU hasn't gone down in the meantime: */
-	if (unlikely(busiest_cpu != smp_processor_id() ||
-		     !busiest_rq->active_balance))
+	if (unlikely(!busiest_rq->active_balance))
 		goto out_unlock;
 
 	/* Is there any task to move? */
@@ -9981,13 +9977,10 @@ static int active_load_balance_cpu_stop(void *data)
 	rcu_read_unlock();
 out_unlock:
 	busiest_rq->active_balance = 0;
-	rq_unlock(busiest_rq, &rf);
 
 	if (p)
 		attach_one_task(target_rq, p);
 
-	local_irq_enable();
-
 	return 0;
 }
 
-- 
2.17.1
