Date: Wed, 10 Apr 2024 08:11:52 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Linus Torvalds' <torvalds@...ux-foundation.org>, Peter Zijlstra
	<peterz@...radead.org>
CC: Ingo Molnar <mingo@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
	Peter Anvin <hpa@...or.com>, the arch/x86 maintainers <x86@...nel.org>,
	"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: RE: More annoying code generation by clang

From: Linus Torvalds
> Sent: 08 April 2024 20:43
...
> I think it's mainly some of the bitop code that people have noticed
> before - fls and variable_ffs() and friends.
> 
> I suspect clang is more common in the arm64 world than it is for
> x86-64 kernel developers, and arm64 inline asm basically never uses
> "rm" or "g" since arm64 doesn't have instructions that take either a
> register or a memory operand.
> 
> Anyway, with gcc this generates
> 
>         cmp (%rdx),%ebx; sbb %rax,%rax  # _7->max_fds, fd, __mask
> 
> IOW, it uses the memory location for "max_fds". It couldn't do that
> before, because it used to think that it always had to do the compare
> in 64 bits, and the memory location is only 32-bit.
> 
> With clang, this generates
> 
>         movl    (%rcx), %eax
>         cmpl    %eax, %edi
>         sbbq    %rdi, %rdi
> 
> which has that extra register use, but is at least much better than
> what it used to generate with crazy "load into register, spill to
> stack, then compare against stack contents".
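
For reference, that cmp/sbb pair is the bounds-masking idiom along the
lines of the x86 array_index_mask_nospec(): cmp sets the carry flag when
index < size, and sbb of a register with itself yields -CF, i.e. an
all-ones mask when in bounds and zero otherwise. A minimal sketch, with
my own names and constraint choices rather than the actual kernel source:

static inline unsigned long index_mask(unsigned int index, unsigned int size)
{
	unsigned long mask;

	/* "g" lets gcc fold a memory operand for size, as in the quoted
	 * 'cmp (%rdx),%ebx'; clang instead loads it into a register first. */
	asm ("cmpl %[size], %[index]\n\t"
	     "sbbq %[mask], %[mask]"
	     : [mask] "=r" (mask)
	     : [size] "g" (size), [index] "r" (index)
	     : "cc");
	return mask;
}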

Provided the compiler can find a register, I doubt the extra
instruction makes much difference.
The 'cmp (%rdx),%ebx' ends up being 2 u-ops, the same as
the movl/cmpl pair.
Instruction decode and retirement aren't often bottlenecks on recent CPUs,
so I suspect the main difference is cache footprint.

Trying to measure the difference is probably impossible...

You'll probably get a bigger difference by changing a lot of
function results and parameters to 'unsigned long' to remove
all the zero-extending that happens.
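
A made-up example of the zero-extension in question (typical x86-64
output noted in the comments; the exact code depends on the compiler):

extern unsigned long table[];

unsigned long lookup32(unsigned int idx)
{
	/* the ABI leaves the upper 32 bits of %edi undefined, so the
	 * index has to be re-zero-extended before the 64-bit address
	 * arithmetic: typically "mov %edi,%edi; mov table(,%rdi,8),%rax" */
	return table[idx];
}

unsigned long lookup64(unsigned long idx)
{
	/* just "mov table(,%rdi,8),%rax" */
	return table[idx];
}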

	David

