Message-ID: <Z061gVVaFwRwd-U0@gpd3>
Date: Tue, 3 Dec 2024 08:38:41 +0100
From: Andrea Righi <arighi@...dia.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched_ext: Introduce per-NUMA idle cpumasks

On Sat, Nov 30, 2024 at 04:24:36PM +0100, Andrea Righi wrote:
> On Fri, Nov 29, 2024 at 11:38:53AM -0800, Yury Norov wrote:
> ...
> > >  static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> > >  {
> > > -     int cpu;
> > > +     int start = cpu_to_node(smp_processor_id());
> > > +     int node, cpu;
> > >
> > >  retry:
> > >       if (sched_smt_active()) {
> > > -             cpu = cpumask_any_and_distribute(idle_masks.smt, cpus_allowed);
> > > -             if (cpu < nr_cpu_ids)
> > > -                     goto found;
> > > +             for_each_node_state_wrap(node, N_ONLINE, start) {
> > > +                     if (!cpumask_intersects(idle_masks[node]->smt, cpus_allowed))
> > > +                             continue;
> > > +                     cpu = cpumask_any_and_distribute(idle_masks[node]->smt, cpus_allowed);
> > > +                     if (cpu < nr_cpu_ids)
> > > +                             goto found;
> > > +             }
> > 
> > The same consideration as for v1 applies here:
> > if idle_masks[node]->smt and cpus_allowed are disjoint, the
> > cpumask_any_and_distribute() will return >= nr_cpu_ids, and we'll go to
> > the next iteration. No need to call cpumask_intersects().
> 
> For some reason, removing cpumask_intersects() here seems to introduce a
> slight performance drop.
> 
> My initial assumption was that the performance drop occurs because
> cpus_allowed often doesn't intersect with idle_masks[node]->smt (since
> cpus_allowed usually doesn't span multiple NUMA nodes), so running
> cpumask_any_and_distribute() on N cpumasks (worst case) is slower than
> first checking for an intersection.
> 
> However, I will rerun the test to ensure that the regression is
> consistent and not just a measurement error.

I did more testing and the slight performance drop is not consistent,
so I believe we can attribute it to measurement error.

I'll send a v3 that removes cpumask_intersects() and includes some minor
code refactoring.
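
For illustration, here's a minimal sketch of what the SMT part of the
loop could look like with the cpumask_intersects() check dropped,
reusing the identifiers from the hunk quoted above (the actual v3
refactoring may end up looking different):

	if (sched_smt_active()) {
		for_each_node_state_wrap(node, N_ONLINE, start) {
			/*
			 * No cpumask_intersects() needed: when the two
			 * masks are disjoint, cpumask_any_and_distribute()
			 * already returns >= nr_cpu_ids and we simply move
			 * on to the next node.
			 */
			cpu = cpumask_any_and_distribute(idle_masks[node]->smt,
							 cpus_allowed);
			if (cpu < nr_cpu_ids)
				goto found;
		}
	}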

-Andrea
