Message-ID: <Za11asdkTrKzrL8e@yury-ThinkPad>
Date: Sun, 21 Jan 2024 11:50:02 -0800
From: Yury Norov <yury.norov@...il.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Breno Leitao <leitao@...ian.org>,
	Nathan Chancellor <nathan@...nel.org>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH 1/9] cpumask: introduce for_each_cpu_and_from()

On Sat, Jan 20, 2024 at 11:03:37AM +0800, Ming Lei wrote:
> On Fri, Jan 19, 2024 at 06:50:45PM -0800, Yury Norov wrote:
> > Similarly to for_each_cpu_and(), introduce a for_each_cpu_and_from(),
> > which is handy when it's needed to traverse 2 cpumasks or bitmaps,
> > starting from a given position.
> 
> The new helper is useless, see
> 
> https://lore.kernel.org/lkml/ZZNgDb6bzOscrNmk@fedora/

Let's consider the following configuration.

CPUs: 0b1111
Sibling groups: 0b0011 and 0b1100
nmsk: 0b1111

As the complexity measure, take the number of accesses the search
routines make to nmsk in the outer loop, and to (nmsk & siblmsk) in the
inner loop, so that

	cpumask_first(1111)

requires 1 access to find the first set bit, and

	cpumask_first(1000)

requires 4 such accesses.

The actual find_bit() ops do better than this because they use __ffs()
on whole words, but on long bitmaps the performance still scales as
described above.
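
To make the model concrete, here is a toy scan that charges one access
per examined bit (an illustration only, not the kernel's find_bit()
implementation):

  /*
   * Toy bit-by-bit scan, used only to illustrate the access-counting
   * model above. The real find_first_bit() skips zero words and uses
   * __ffs() inside a word.
   */
  static int toy_first_bit(unsigned long mask, int nbits)
  {
          int i;

          for (i = 0; i < nbits; i++)     /* one access per iteration */
                  if (mask & (1UL << i))
                          return i;
          return nbits;
  }

In this model, toy_first_bit(0b1111, 4) costs 1 access and
toy_first_bit(0b1000, 4) costs 4, matching the numbers above.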

Now, look at the code. This is yours:

  static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
                                  unsigned int cpus_per_grp)
  {
          const struct cpumask *siblmsk;
          int cpu, sibl;
  
          for ( ; cpus_per_grp > 0; ) {
                  cpu = cpumask_first(nmsk);
  
                  /* Should not happen, but I'm too lazy to think about it */
                  if (cpu >= nr_cpu_ids)
                          return;
  
                  cpumask_clear_cpu(cpu, nmsk);
                  cpumask_set_cpu(cpu, irqmsk);
                  cpus_per_grp--;
  
                  /* If the cpu has siblings, use them first */
                  siblmsk = topology_sibling_cpumask(cpu);
                  for (sibl = -1; cpus_per_grp > 0; ) {
                          sibl = cpumask_next(sibl, siblmsk);
                          if (sibl >= nr_cpu_ids)
                                  break;
                          if (!cpumask_test_and_clear_cpu(sibl, nmsk))
                                  continue;
                          cpumask_set_cpu(sibl, irqmsk);
                          cpus_per_grp--;
                  }
          }
  }

This is your code step-by-step:

#	loop	cpu	match	siblmsk	nmsk	irqmsk
 0	outer	0	yes 		1110	0001
 1	inner	0	no  		1110	0001
 2	inner	1	yes 	0011	1100	0011
 3	inner	2	no 		1100	0011
 4	inner	3	no 		1100	0011
 5	outer	0	no	 	1100	0011
 6	outer	1	no	 	1100	0011
 7	outer	2	yes	 	1000	0111
 8	inner	0	no 	1100	1000	0111
 9	inner	1	no 	1100	1000	0111
10	inner	2	no 	1100	1000	0111
11	inner	3	yes	1100	0000	1111
12	outer	0	no	 	0000	1111
13	outer	1	no	 	0000	1111
14	outer	2	no	 	0000	1111
15	outer	3	no	 	0000	1111

This is mine:

  static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
                                  unsigned int cpus_per_grp)
  {
          const struct cpumask *siblmsk;
          int cpu, sibl;
  
          for_each_cpu(cpu, nmsk) {
                  if (cpus_per_grp-- == 0)
                          return;
  
                  /*
                   * If a caller wants to spread IRQs on offline CPUs, we need to
                   * take care of it explicitly because those offline CPUs are not
                   * included in the siblings cpumask.
                   */
                  __cpumask_clear_cpu(cpu, nmsk);
                  __cpumask_set_cpu(cpu, irqmsk);
  
                  /* If the cpu has siblings, use them first */
                  siblmsk = topology_sibling_cpumask(cpu);
                  sibl = cpu + 1;
  
                  for_each_cpu_and_from(sibl, siblmsk, nmsk) {
                          if (cpus_per_grp-- == 0)
                                  return;
  
                          __cpumask_clear_cpu(sibl, nmsk);
                          __cpumask_set_cpu(sibl, irqmsk);
                          cpu = sibl + 1;
                  }
          }
  }

Step-by-step:

#	loop	cpu	match	siblmsk	nmsk	irqmsk
 0	outer	0	yes 		1110	0001
 1	inner	1	yes 	0011	1100	0011
 2	inner	2	no 	0011	1100	0011
 3	inner	3	no 	0011	1100	0011
 4	outer	2	yes	 	1000	0111
 5	inner	3	yes	1100	0000	1111
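
For reference, for_each_cpu_and_from() used above could expand to
something like this minimal sketch, assuming it simply mirrors
for_each_cpu_and() but keeps the caller-provided start position instead
of resetting it to 0 (see patch 1/9 for the actual definition):

  /*
   * Sketch only: assumes the iterator is for_each_cpu_and() minus the
   * initialization of 'cpu', so the walk resumes from wherever the
   * caller left it. The real definition is in patch 1/9.
   */
  #define for_each_cpu_and_from(cpu, mask1, mask2)                      \
          for (; (cpu) = cpumask_next_and((cpu) - 1, (mask1), (mask2)), \
                 (cpu) < nr_cpu_ids;                                    \
               (cpu)++)

That's what lets the inner loop pick up the search at cpu + 1 instead of
rescanning the siblings mask from bit 0.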

Your code performs worse because it's a Schlemiel the Painter's
algorithm: every pass restarts the search from the beginning of the
masks, re-scanning bits that have already been cleared. I mentioned
this twice in the commit messages and at least 3 times in replies to
your comments.

Here I'll stop and will not reply to your emails, including the rest of
Friday night's mailbombing, unless you at least admit that you're wrong
in this case and that for_each_cpu_and_from() is useful here.

I'd also recommend that you learn more about the basics of atomic
operations and revoke your NAK on patch #3.

Thanks,
	Yury

PS: There's a typo in the series name; I meant that the series makes the
function O(N), of course. But even that is overly optimistic: it's O(N*S),
where S is the number of sibling groups. A couple more patches are needed
to make it truly O(N). Still, it's much better.
