Message-ID: <ab5f9e1f-cdec-4993-822f-d9b64144ad7c@linux.ibm.com>
Date: Fri, 4 Jul 2025 01:09:50 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>
Cc: Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Tim Chen <tim.c.chen@...el.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
Libo Chen <libo.chen@...cle.com>, Abel Wu <wuyun.abel@...edance.com>,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
Hillf Danton <hdanton@...a.com>, Len Brown <len.brown@...el.com>,
linux-kernel@...r.kernel.org, Chen Yu <yu.c.chen@...el.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>
Subject: Re: [RFC patch v3 04/20] sched: Avoid calculating the cpumask if the
system is overloaded
On 6/18/25 23:57, Tim Chen wrote:
> From: K Prateek Nayak <kprateek.nayak@....com>
>
> If SIS_UTIL cuts off the idle CPU search, the result of the cpumask_and()
> is of no use. Since select_idle_cpu() can now be called twice per wakeup
> from select_idle_sibling() due to cache-aware wakeup, this overhead can
> be visible in benchmarks such as hackbench.
>
> To save some additional cycles, especially in cases where we target
> the LLC frequently and the search bails out because the LLC is busy,
> only calculate the cpumask if the system is not overloaded.
>
This patch could go in independently and should help in general, but then
the changelog would need to be updated (it currently refers to the
cache-aware wakeup introduced by this series).
> Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
> ---
> kernel/sched/fair.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 567ad2a0cfa2..6a2678f9d44a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7918,8 +7918,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> int i, cpu, idle_cpu = -1, nr = INT_MAX;
> struct sched_domain_shared *sd_share;
>
> - cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> -
> if (sched_feat(SIS_UTIL)) {
> sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
> if (sd_share) {
> @@ -7931,6 +7929,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> }
> }
>
> + cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
> if (static_branch_unlikely(&sched_cluster_active)) {
> struct sched_group *sg = sd->groups;
>
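For anyone skimming: the change simply defers the relatively expensive
cpumask_and() until after the cheap SIS_UTIL bail-out, so the mask is never
computed on the early-return path. A minimal, self-contained sketch of that
pattern (toy types and hypothetical names, not the actual kernel code):

	/*
	 * Toy illustration of the reordering above: do the cheap
	 * "LLC overloaded" check first and bail out before paying for
	 * the mask intersection.
	 */
	#include <stdio.h>

	/* Stand-ins for sched_domain_span(sd) and p->cpus_ptr. */
	typedef unsigned long cpumask_t;

	static int select_idle_cpu_sketch(cpumask_t sd_span, cpumask_t allowed,
					  int nr_to_scan)
	{
		cpumask_t cpus;

		/* Cheap check first (mirrors the SIS_UTIL cut-off). */
		if (nr_to_scan <= 0)
			return -1;	/* bail before touching the mask */

		/* Only now pay for the mask intersection. */
		cpus = sd_span & allowed;

		/* ...scan 'cpus' for an idle CPU (omitted)... */
		return cpus ? __builtin_ctzl(cpus) : -1;
	}

	int main(void)
	{
		/* Overloaded: returns -1 without computing the intersection. */
		printf("%d\n", select_idle_cpu_sketch(0xffUL, 0x0fUL, 0));
		/* Not overloaded: mask computed, first candidate returned. */
		printf("%d\n", select_idle_cpu_sketch(0xffUL, 0x0fUL, 4));
		return 0;
	}

When the LLC is busy the early return is taken and the intersection is
skipped entirely, which is where the cycles are saved.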