Message-ID: <20250929140920.GN3419281@noisy.programming.kicks-ass.net>
Date: Mon, 29 Sep 2025 16:09:20 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Chen Yu <yu.c.chen@...el.com>
Cc: Ingo Molnar <mingo@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Libo Chen <libo.chen@...cle.com>,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
Hillf Danton <hdanton@...a.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Jianyong Wu <jianyong.wu@...look.com>,
Yangyu Chen <cyy@...self.name>,
Tingyin Duan <tingyin.duan@...il.com>,
Vern Hao <vernhao@...cent.com>, Len Brown <len.brown@...el.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Aubrey Li <aubrey.li@...el.com>, Zhao Liu <zhao1.liu@...el.com>,
Chen Yu <yu.chen.surf@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v4 06/28] sched: Save the per LLC utilization for
better cache aware scheduling
On Sat, Aug 09, 2025 at 01:02:54PM +0800, Chen Yu wrote:
> +#ifdef CONFIG_SCHED_CACHE
> +/*
> + * Save this sched group's statistics for later use:
> + * task wakeup and load balancing can make better
> + * decisions based on them.
> + */
> +static void update_sg_if_llc(struct lb_env *env, struct sg_lb_stats *sgs,
> + struct sched_group *group)
> +{
> + /* Find the sched domain that spans this group. */
> + struct sched_domain *sd = env->sd->child;
> + struct sched_domain_shared *sd_share;
> +
> + if (!sched_feat(SCHED_CACHE) || env->idle == CPU_NEWLY_IDLE)
> + return;
> +
> +	/* Only care about the sched domain that spans exactly one LLC. */
> + if (!sd || !(sd->flags & SD_SHARE_LLC) ||
> + !sd->parent || (sd->parent->flags & SD_SHARE_LLC))
> + return;
Did you want to write:

	if (sd != per_cpu(sd_llc))
		return;

Or something?
> + sd_share = rcu_dereference(per_cpu(sd_llc_shared,
> + cpumask_first(sched_group_span(group))));
> + if (!sd_share)
> + return;
> +
> + if (likely(READ_ONCE(sd_share->util_avg) != sgs->group_util))
> + WRITE_ONCE(sd_share->util_avg, sgs->group_util);
If you expect it to be different, does that whole load and compare still
matter?
> +}