Message-ID: <e1977663-a9b5-4a01-a1e1-0cad2ffe13db@amd.com>
Date: Fri, 23 Jan 2026 08:40:37 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Qiliang Yuan <realwujing@...il.com>, <juri.lelli@...hat.com>,
<peterz@...radead.org>, <vincent.guittot@...aro.org>
CC: <bsegall@...gle.com>, <dietmar.eggemann@....com>,
<linux-kernel@...r.kernel.org>, <mgorman@...e.de>, <mingo@...hat.com>,
<rostedt@...dmis.org>, <vschneid@...hat.com>, Qiliang Yuan
<yuanql9@...natelecom.cn>
Subject: Re: [PATCH v2] sched/fair: Cache NUMA node statistics to avoid O(N)
scanning

Hello Qiliang,

On 1/23/2026 7:09 AM, Qiliang Yuan wrote:
> Optimize update_numa_stats() by leveraging pre-calculated group
> statistics from the load balancer hierarchy. This reduces the complexity
> of NUMA balancing overhead from O(CPUs_per_node) to O(1) in the hot path
Is this really a hot path? How much of a difference does this make? Some
benchmark numbers to support the claim would be good.
> when stats are fresh.
>
> Signed-off-by: Qiliang Yuan <realwujing@...il.com>
> Signed-off-by: Qiliang Yuan <yuanql9@...natelecom.cn>
> ---
> kernel/sched/fair.c | 35 +++++++++++++++++++++++++++++++++++
> kernel/sched/sched.h | 7 +++++++
> 2 files changed, 42 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e71302282671..dc46262bd227 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2099,11 +2099,36 @@ static void update_numa_stats(struct task_numa_env *env,
> bool find_idle)
> {
> int cpu, idle_core = -1;
> + struct sched_domain *sd;
> + struct sched_group *sg;
>
> memset(ns, 0, sizeof(*ns));
> ns->idle_cpu = -1;
>
> rcu_read_lock();
> + /* Algorithmic Optimization: Avoid O(N) scan by using cached stats from load balancer */
> + sd = rcu_dereference(per_cpu(sd_numa, env->src_cpu));
> + if (sd && !find_idle) {
> + sg = sd->groups;
The first group is always the local group and should contain the CPU you
are looking at. No need for the do-while.
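Perhaps something along these lines (untested, just to illustrate dropping
the loop; the READ_ONCE() assumes it pairs with your WRITE_ONCE() on the
update side):

	sd = rcu_dereference(per_cpu(sd_numa, env->src_cpu));
	if (sd && !find_idle) {
		/* The local group; no need to walk the whole group ring. */
		sg = sd->groups;

		if (cpumask_test_cpu(cpumask_first(cpumask_of_node(nid)),
				     sched_group_span(sg)) &&
		    time_before(jiffies, READ_ONCE(sg->sgc->stats_update) +
					 msecs_to_jiffies(10))) {
			/* ... copy the cached sgc stats into *ns ... */
		}
	}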
> + do {
> + /* Check if this group corresponds to the node we are interested in */
> + if (cpumask_test_cpu(cpumask_first(cpumask_of_node(nid)), sched_group_span(sg))) {
How often is this true? How much benefit are you seeing from this?
> + /* Use cached stats if they are recent enough (e.g. within 10ms) */
> + if (time_before(jiffies, sg->sgc->stats_update + msecs_to_jiffies(10))) {
> + ns->load = sg->sgc->load;
> + ns->runnable = sg->sgc->runnable;
> + ns->util = sg->sgc->util;
> + ns->nr_running = sg->sgc->nr_running;
> + ns->compute_capacity = sg->sgc->capacity;
Nothing protects against parallel updates to these variables from, say, a
newidle balance, so you can see an inconsistent state here.
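If these fields really need to be read as a consistent snapshot, one option
(purely a sketch; the "stats_seq" seqcount below would be a new field in
sched_group_capacity) is a read-side retry loop:

	unsigned int seq;

	do {
		seq = read_seqcount_begin(&sg->sgc->stats_seq);
		ns->load = sg->sgc->load;
		ns->runnable = sg->sgc->runnable;
		ns->util = sg->sgc->util;
		ns->nr_running = sg->sgc->nr_running;
		ns->compute_capacity = sg->sgc->capacity;
	} while (read_seqcount_retry(&sg->sgc->stats_seq, seq));

with the matching write side shown further down.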
> + rcu_read_unlock();
> + goto skip_scan;
> + }
> + break;
> + }
> + sg = sg->next;
> + } while (sg != sd->groups);
> + }
> +
> for_each_cpu(cpu, cpumask_of_node(nid)) {
> struct rq *rq = cpu_rq(cpu);
>
> @@ -2126,6 +2151,7 @@ static void update_numa_stats(struct task_numa_env *env,
> }
> rcu_read_unlock();
>
> +skip_scan:
You can move that label before the unlock and save the extra unlock before
the jump.
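i.e. something like:

	rcu_read_lock();
	/* cached-stats fast path can "goto skip_scan" without unlocking */
	/* ... the per-CPU scan loop ... */
skip_scan:
	rcu_read_unlock();

	ns->weight = cpumask_weight(cpumask_of_node(nid));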
> ns->weight = cpumask_weight(cpumask_of_node(nid));
>
> ns->node_type = numa_classify(env->imbalance_pct, ns);
> @@ -10488,6 +10514,15 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> if (sgs->group_type == group_overloaded)
> sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> sgs->group_capacity;
> +
> + /* Algorithmic Optimization: Cache group stats for O(1) NUMA lookups */
> + if (env->sd->flags & SD_NUMA) {
> + group->sgc->nr_running = sgs->sum_h_nr_running;
> + group->sgc->load = sgs->group_load;
> + group->sgc->util = sgs->group_util;
> + group->sgc->runnable = sgs->group_runnable;
> + WRITE_ONCE(group->sgc->stats_update, jiffies);
Again, nothing protects against concurrent updates from the newidle
context. Is it okay to see some intermediate state in update_numa_stats()?
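If you go the seqcount route from the earlier comment, the write side would
look roughly like this (again only a sketch; stats_seq is hypothetical, and
note a plain seqcount also requires the writers themselves to be serialized,
which concurrent balancing does not give you for free):

	if (env->sd->flags & SD_NUMA) {
		write_seqcount_begin(&group->sgc->stats_seq);
		group->sgc->nr_running = sgs->sum_h_nr_running;
		group->sgc->load = sgs->group_load;
		group->sgc->util = sgs->group_util;
		group->sgc->runnable = sgs->group_runnable;
		group->sgc->stats_update = jiffies;
		write_seqcount_end(&group->sgc->stats_seq);
	}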
> + }
> }
>
> /**
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index d30cca6870f5..81160790993e 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2105,6 +2105,13 @@ struct sched_group_capacity {
>
> int id;
>
> + /* O(1) NUMA stats cache */
> + unsigned long nr_running;
> + unsigned long load;
> + unsigned long util;
> + unsigned long runnable;
> + unsigned long stats_update;
> +
That is 40 more bytes in every sched_group_capacity that will only ever be
used by the groups of one SD_NUMA domain. I believe there should be a
better way to do this than burdening everyone.
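For example (purely hypothetical, just to illustrate the direction), a
small per-node cache allocated once at topology init would keep this out of
every sched_group_capacity:

	struct numa_stats_cache {
		unsigned long	nr_running;
		unsigned long	load;
		unsigned long	util;
		unsigned long	runnable;
		unsigned long	stats_update;
	};

	/* nr_node_ids entries, indexed by nid; allocated in sched_init_numa() */
	static struct numa_stats_cache *numa_stats_cache __read_mostly;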
> unsigned long cpumask[]; /* Balance mask */
> };
>
--
Thanks and Regards,
Prateek