Message-Id: <20251127074612.147150-1-adamli@os.amperecomputing.com>
Date: Thu, 27 Nov 2025 07:46:12 +0000
From: Adam Li <adamli@...amperecomputing.com>
To: mingo@...hat.com,
	peterz@...radead.org,
	juri.lelli@...hat.com,
	vincent.guittot@...aro.org
Cc: dietmar.eggemann@....com,
	rostedt@...dmis.org,
	bsegall@...gle.com,
	mgorman@...e.de,
	vschneid@...hat.com,
	cl@...ux.com,
	linux-kernel@...r.kernel.org,
	patches@...erecomputing.com,
	shkaushik@...erecomputing.com,
	Adam Li <adamli@...amperecomputing.com>
Subject: [RFC PATCH] Remove redundant avg_idle check from sched_balance_newidle

In sched_balance_newidle(), rq->avg_idle is checked against
sd->max_newidle_lb_cost in two places. However, these two conditional
checks are logically duplicated:

sched_balance_newidle()
{
	u64 curr_cost = 0;
[...]
	if (!get_rd_overloaded(this_rq->rd) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) [1] {
		[...]
		goto out;
	}
	[...]
	sched_balance_update_blocked_averages(this_cpu);
	for_each_domain(this_cpu, sd) {
	[...]
		if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost) [2]
			break;
	[...]
	}
[...]
out:
[...]
}

In the first iteration of the for_each_domain() loop curr_cost is 0, so
conditional checks [1] and [2] are the same.

This patch removes conditional check [1]. After the patch, more CPU cycles
might be spent in sched_balance_update_blocked_averages() in cases where [1]
would have been true. However, benchmarks show the patch does not change
performance.

Tested with schbench and SPECjbb on an AmpereOne CPU. The schbench command
is:
./schbench -L -m 4 -M auto -t 256 -n 0 -r 0 -s 0

Signed-off-by: Adam Li <adamli@...amperecomputing.com>
---
 kernel/sched/fair.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b752324270b..bbbe48ae6614 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12825,8 +12825,7 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
 	rcu_read_lock();
 	sd = rcu_dereference_check_sched_domain(this_rq->sd);
 
-	if (!get_rd_overloaded(this_rq->rd) ||
-	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
+	if (!get_rd_overloaded(this_rq->rd)) {
 
 		if (sd)
 			update_next_balance(sd, &next_balance);
-- 
2.34.1

