Message-ID: <20251015122125.GU3289052@noisy.programming.kicks-ass.net>
Date: Wed, 15 Oct 2025 14:21:25 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Ingo Molnar <mingo@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
Hillf Danton <hdanton@...a.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Jianyong Wu <jianyong.wu@...look.com>,
Yangyu Chen <cyy@...self.name>,
Tingyin Duan <tingyin.duan@...il.com>,
Vern Hao <vernhao@...cent.com>, Len Brown <len.brown@...el.com>,
Aubrey Li <aubrey.li@...el.com>, Zhao Liu <zhao1.liu@...el.com>,
Chen Yu <yu.chen.surf@...il.com>, Chen Yu <yu.c.chen@...el.com>,
Libo Chen <libo.chen@...cle.com>,
Adam Li <adamli@...amperecomputing.com>,
Tim Chen <tim.c.chen@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 08/19] sched/fair: Introduce per runqueue task LLC
preference counter
On Sat, Oct 11, 2025 at 11:24:45AM -0700, Tim Chen wrote:
> Each runqueue is assigned a static array where each element tracks
> the number of tasks preferring a given LLC, indexed from 0 to
> NR_LLCS - 1.
>
> For example, rq->nr_pref_llc[3] = 2 signifies that there are 2 tasks on
> this runqueue which prefer to run within LLC3.
>
> The load balancer can use this information to identify busy runqueues
> and migrate tasks to their preferred LLC domains.
>
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> ---
> kernel/sched/fair.c | 35 +++++++++++++++++++++++++++++++++++
> kernel/sched/sched.h | 1 +
> 2 files changed, 36 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fd315937c0cf..b7a68fe7601b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1235,22 +1235,51 @@ static inline int llc_idx(int cpu)
> return per_cpu(sd_llc_idx, cpu);
> }
>
> +static inline int pref_llc_idx(struct task_struct *p)
> +{
> + return llc_idx(p->preferred_llc);
> +}
> +
> static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
> {
> + int pref_llc;
> +
> if (!sched_cache_enabled())
> return;
>
> rq->nr_llc_running += (p->preferred_llc != -1);
> rq->nr_pref_llc_running += (p->preferred_llc == task_llc(p));
> +
> + if (p->preferred_llc < 0)
> + return;
> +
> + pref_llc = pref_llc_idx(p);
> + if (pref_llc < 0)
> + return;
> +
> + ++rq->nr_pref_llc[pref_llc];
> }
>
> static void account_llc_dequeue(struct rq *rq, struct task_struct *p)
> {
> + int pref_llc;
> +
> if (!sched_cache_enabled())
> return;
>
> rq->nr_llc_running -= (p->preferred_llc != -1);
> rq->nr_pref_llc_running -= (p->preferred_llc == task_llc(p));
> +
> + if (p->preferred_llc < 0)
> + return;
> +
> + pref_llc = pref_llc_idx(p);
> + if (pref_llc < 0)
> + return;
> +
> + /* avoid negative counter */
> + if (rq->nr_pref_llc[pref_llc] > 0)
> + --rq->nr_pref_llc[pref_llc];
How would this ever go negative!? Also, please use post
increment/decrement operators.
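
That is, something like this (untested) ought to be sufficient:

	pref_llc = pref_llc_idx(p);
	if (pref_llc < 0)
		return;

	rq->nr_pref_llc[pref_llc]--;

If that counter can actually go negative, the enqueue/dequeue
accounting is broken and wants fixing, not papering over.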
> }
>
> void mm_init_sched(struct mm_struct *mm, struct mm_sched __percpu *_pcpu_sched)
> @@ -1524,10 +1553,16 @@ void init_sched_mm(struct task_struct *p)
>
> void reset_llc_stats(struct rq *rq)
> {
> + int i = 0;
> +
> if (!sched_cache_enabled())
> return;
>
> rq->nr_llc_running = 0;
> +
> + for (i = 0; i < max_llcs; ++i)
> + rq->nr_pref_llc[i] = 0;
> +
> rq->nr_pref_llc_running = 0;
> }
Still don't understand why this thing exists...
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3ab64067acc6..b801d32d5fba 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1101,6 +1101,7 @@ struct rq {
> #ifdef CONFIG_SCHED_CACHE
> unsigned int nr_pref_llc_running;
> unsigned int nr_llc_running;
> + unsigned int nr_pref_llc[NR_LLCS];
Gah, yeah, let's not do this. Just (re)alloc the thing on topology
changes or something.
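
Completely untested, and the hook point is a guess (any place that
runs once max_llcs is settled after a topology update would do), but
something along these lines, with rq->nr_pref_llc becoming an
unsigned int * instead of a fixed-size array:

	int cpu;

	for_each_possible_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);
		unsigned int *counts;

		counts = kcalloc(max_llcs, sizeof(*counts), GFP_KERNEL);
		if (!counts)
			return -ENOMEM;

		kfree(rq->nr_pref_llc);
		rq->nr_pref_llc = counts;
	}

(locking against concurrent readers left as an exercise)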