Message-ID: <Z0suMHchW-KyIGyy@gpd3>
Date: Sat, 30 Nov 2024 16:24:32 +0100
From: Andrea Righi <arighi@...dia.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched_ext: Introduce per-NUMA idle cpumasks
On Fri, Nov 29, 2024 at 11:38:53AM -0800, Yury Norov wrote:
...
> > static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> > {
> > - int cpu;
> > + int start = cpu_to_node(smp_processor_id());
> > + int node, cpu;
> >
> > retry:
> > if (sched_smt_active()) {
> > - cpu = cpumask_any_and_distribute(idle_masks.smt, cpus_allowed);
> > - if (cpu < nr_cpu_ids)
> > - goto found;
> > + for_each_node_state_wrap(node, N_ONLINE, start) {
> > + if (!cpumask_intersects(idle_masks[node]->smt, cpus_allowed))
> > + continue;
> > + cpu = cpumask_any_and_distribute(idle_masks[node]->smt, cpus_allowed);
> > + if (cpu < nr_cpu_ids)
> > + goto found;
> > + }
>
> The same consideration applies here as for v1:
> if idle_masks[node]->smt and cpus_allowed are disjoint, the
> cpumask_any_and_distribute() will return >= nr_cpu_ids, and we'll go to
> the next iteration. No need to call cpumask_intersects().
For some reason, removing cpumask_intersects() here seems to introduce a
slight performance drop.
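For reference, this is roughly the variant I tested (a sketch of the
SMT branch only, with the pre-check dropped as suggested):

	if (sched_smt_active()) {
		for_each_node_state_wrap(node, N_ONLINE, start) {
			/*
			 * No cpumask_intersects() pre-check: if the two
			 * masks are disjoint, cpumask_any_and_distribute()
			 * returns >= nr_cpu_ids and we simply move on to
			 * the next node.
			 */
			cpu = cpumask_any_and_distribute(idle_masks[node]->smt,
							 cpus_allowed);
			if (cpu < nr_cpu_ids)
				goto found;
		}
	}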
My initial assumption was that the performance drop occurs because
cpus_allowed often doesn't intersect with idle_masks[node]->smt (a
task's cpus_allowed usually doesn't span multiple NUMA nodes), so
calling cpumask_any_and_distribute() on up to N node cpumasks in the
worst case ends up slower than doing the cheap intersection check
first.
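For context, my mental model of what the two helpers do on disjoint
masks (sketched from memory of lib/bitmap.c and lib/cpumask.c, so the
details may be slightly off): cpumask_intersects() is a single AND-scan
over the bitmap words with no state to maintain, while
cpumask_any_and_distribute() does a wrapping search (potentially two
passes around its per-CPU starting point) plus per-CPU state accesses:

	/* __bitmap_intersects(): one linear AND-scan, stateless */
	for (k = 0; k < lim; k++)
		if (bitmap1[k] & bitmap2[k])
			return true;
	return false;

	/* cpumask_any_and_distribute(): wrapping find + per-CPU state */
	prev = __this_cpu_read(distribute_cpu_mask_prev);
	next = find_next_and_bit_wrap(cpumask_bits(src1p),
				      cpumask_bits(src2p),
				      nr_cpumask_bits, prev + 1);
	if (next < nr_cpu_ids)
		__this_cpu_write(distribute_cpu_mask_prev, next);

With N online nodes and a cpus_allowed confined to one of them, the
pre-check turns N-1 of those heavier calls into cheap rejections.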
However, I will rerun the test to ensure that the regression is
consistent and not just a measurement error.
-Andrea