Message-ID: <Za3V1pFXM+4UnoIM@fedora>
Date: Mon, 22 Jan 2024 10:41:26 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Breno Leitao <leitao@...ian.org>,
	Nathan Chancellor <nathan@...nel.org>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	Zi Yan <ziy@...dia.com>, ming.lei@...hat.com
Subject: Re: [PATCH 1/9] cpumask: introduce for_each_cpu_and_from()

On Sun, Jan 21, 2024 at 11:50:02AM -0800, Yury Norov wrote:
> On Sat, Jan 20, 2024 at 11:03:37AM +0800, Ming Lei wrote:
> > On Fri, Jan 19, 2024 at 06:50:45PM -0800, Yury Norov wrote:
> > > Similarly to for_each_cpu_and(), introduce a for_each_cpu_and_from(),
> > > which is handy when it's needed to traverse 2 cpumasks or bitmaps,
> > > starting from a given position.
> > 
> > The new helper is useless, see
> > 
> > https://lore.kernel.org/lkml/ZZNgDb6bzOscrNmk@fedora/
> 
> Let's consider the following configuration.
> Step-by-step:

..

> 
> #	loop	cpu	match	siblmsk	nmsk	irqmsk
>  0	outer	0	yes 		1110	0001
>  1	inner	1	yes 	0011	1100	0011
>  2	inner	2	no 	0011	1100	0011
>  3	inner	3	no 	0011	1100	0011
>  4	outer	2	yes	 	1000	0111
>  5	inner	3	yes	1100	0000	1111
> 
> Your code works worse because it's a Schlemiel the Painter's algorithm.
> I mentioned it twice in the commit messages and at least 3 times in
> replies to your comments.

Does it really matter in practice? What user-visible improvement can
be observed?

As I have mentioned several times, for control/management code paths we
care more about maintainability and correctness than about efficiency.

You are _wasting_ resources in the wrong place. If you are really interested
in optimization, please work on fast code paths, such as (but not limited to)
irq handling, io handling, memory allocation, ....

Unfortunately, your V5 still has an obvious bug, and as you mentioned,
the patchset title is wrong too.

> 
> Here I'll stop and will not reply to your emails, including the rest of
> that Friday's night mailbombing, unless you at least admit you're wrong
> in this case and for_each_cpu_and_from() is useful here. 

It is easy to get the same result without adding for_each_cpu_and_from();
see the patch I sent:

https://lore.kernel.org/lkml/20240120065543.739203-1-ming.lei@redhat.com/

in which we don't need to update the iterator variable inside the loop;
it also fixes the bug in patch 4 of your v5, and it is still O(N).
Meanwhile it is simpler and easier to prove correct.

Your use of for_each_cpu_and_from() here is tricky, too, because the
loop condition variable (part of the iterator state, the cpu mask) is
updated inside the loop. We can get the same result with
cpumask_next_and() without playing that trick.

> 
> I'd also recommend you to learn more about atomic operations basics and
> revoke your NAK from the patch #3.

If you think my comment behind the NAK is wrong, please reply to that
comment directly.

> 
> Thanks,
> 	Yury
> 
> PS: There's a typo in the series name, I meant that the series makes the
> function O(N) of course. But even that is overly optimistic. It's O(N*S),
> where S is the number of sibling groups. A couple more patches needed to
> make it a true O(N). Still, much better.

Whether it is O(1) or O(N) isn't a big deal here, because this is a
one-shot slow path, and nr_cpu_ids is not big in practice.

You can't even make it truly O(N), because your patch 4 has a logic
mistake; see my comment:

https://lore.kernel.org/lkml/ZatlggW%2F8SH6od9O@fedora/



Thanks,
Ming

