Message-ID: <20240206043921.850302-1-void@manifault.com>
Date: Mon, 5 Feb 2024 22:39:18 -0600
From: David Vernet <void@...ifault.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...hat.com,
peterz@...radead.org,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
bsegall@...gle.com,
bristot@...hat.com,
vschneid@...hat.com,
kernel-team@...a.com
Subject: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

update_sd_pick_busiest() (and its caller) has some room for small
optimizations and readability improvements.
- In update_sd_lb_stats(), we're using a goto to skip a single if check.
Let's remove the goto and just add another condition to the if.
- In update_sd_pick_busiest(), only update a group_misfit_task group to
  be the busiest if it has strictly more load than the current busiest
  group, rather than >= the load.
- When comparing the current struct sched_group with the yet-busiest
domain in update_sd_pick_busiest(), if the two groups have the same
group type, we're currently doing a bit of unnecessary work for any
group >= group_misfit_task. We're comparing the two groups, and then
returning only if false (the group in question is not the busiest).
Otherwise, we break, do an extra unnecessary conditional check that's
vacuously false for any group type > group_fully_busy, and then always
return true. This patch series has us instead simply return directly
in the switch statement, saving some bytes in load_balance().
---
v1: https://lore.kernel.org/lkml/20240202070216.2238392-1-void@manifault.com/
v2: https://lore.kernel.org/all/20240204044618.46100-1-void@manifault.com/
v2 -> v3:
- Add Vincent's Reviewed-by tags
- Fix stale commit summary sentence (Vincent)
v1 -> v2 changes:
- Split the patch series into separate patches (Valentin)
- Update the group_misfit_task busiest check to use strict inequality
David Vernet (3):
sched/fair: Remove unnecessary goto in update_sd_lb_stats()
sched/fair: Do strict inequality check for busiest misfit task group
sched/fair: Simplify some logic in update_sd_pick_busiest()
kernel/sched/fair.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
--
2.43.0