Message-ID: <20250330190221.4b75c7de@pumpkin>
Date: Sun, 30 Mar 2025 19:02:21 +0100
From: David Laight <david.laight.linux@...il.com>
To: Uros Bizjak <ubizjak@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>, x86@...nel.org,
 linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH -tip 2/2] x86/hweight: Use POPCNT when available with
 X86_NATIVE_CPU option

On Sun, 30 Mar 2025 09:49:43 +0200
Uros Bizjak <ubizjak@...il.com> wrote:

> On Sat, Mar 29, 2025 at 12:00 PM David Laight
> <david.laight.linux@...il.com> wrote:
> >
> > On Sat, 29 Mar 2025 10:19:37 +0100
> > Uros Bizjak <ubizjak@...il.com> wrote:
> >  
> > > On Tue, Mar 25, 2025 at 10:56 PM Ingo Molnar <mingo@...nel.org> wrote:  
> > > >
> > > >
> > > > * Uros Bizjak <ubizjak@...il.com> wrote:
> > > >  
> > > > > Emit naked POPCNT instruction when available with X86_NATIVE_CPU
> > > > > option. The compiler is not bound by ABI when emitting the instruction
> > > > > without the fallback call to __sw_hweight{32,64}() library function
> > > > > and has much more freedom to allocate input and output operands,
> > > > > including memory input operand.
> > > > >
> > > > > The code size of x86_64 defconfig (with X86_NATIVE_CPU option)
> > > > > shrinks by 599 bytes:
> > > > >
> > > > >   add/remove: 0/0 grow/shrink: 45/197 up/down: 843/-1442 (-599)
> > > > >   Total: Before=22710531, After=22709932, chg -0.00%
> > > > >
> > > > > The asm changes from e.g.:
> > > > >
> > > > >          3bf9c:       48 8b 3d 00 00 00 00    mov    0x0(%rip),%rdi
> > > > >          3bfa3:       e8 00 00 00 00          call   3bfa8 <...>
> > > > >          3bfa8:       90                      nop
> > > > >          3bfa9:       90                      nop
> > > > >
> > > > > with:
> > > > >
> > > > >            34b:       31 c0                   xor    %eax,%eax
> > > > >            34d:       f3 48 0f b8 c7          popcnt %rdi,%rax
> > > > >
> > > > > in the .altinstr_replacement section
> > > > >
> > > > > to:
> > > > >
> > > > >          3bfdc:       31 c0                   xor    %eax,%eax
> > > > >          3bfde:       f3 48 0f b8 05 00 00    popcnt 0x0(%rip),%rax
> > > > >          3bfe5:       00 00
> > > > >
> > > > > where there is no need for an entry in the .altinstr_replacement
> > > > > section, shrinking all text sections by 9476 bytes:
> > > > >
> > > > >           text           data     bss      dec            hex filename
> > > > >       27267068        4643047  814852 32724967        1f357e7 vmlinux-old.o
> > > > >       27257592        4643047  814852 32715491        1f332e3 vmlinux-new.o  
> > > >  
> > > > > +#ifdef __POPCNT__
> > > > > +     asm_inline (ASM_FORCE_CLR "popcntl %[val], %[cnt]"
> > > > > +                 : [cnt] "=&r" (res)
> > > > > +                 : [val] ASM_INPUT_RM (w));
> > > > > +#else
> > > > >       asm_inline (ALTERNATIVE(ANNOTATE_IGNORE_ALTERNATIVE
> > > > >                               "call __sw_hweight32",
> > > > >                               ASM_CLR "popcntl %[val], %[cnt]",
> > > > >                               X86_FEATURE_POPCNT)
> > > > >                        : [cnt] "=a" (res), ASM_CALL_CONSTRAINT
> > > > >                        : [val] REG_IN (w));  
> > > >
> > > > So a better optimization I think would be to declare and implement
> > > > __sw_hweight32 with a different, less intrusive function call ABI that  
> > >
> > > With an external function, the ABI specifies the location of input
> > > argument and function result. Unless we want to declare the whole
> > > function as asm() inline function (with some 20 instructions), we have
> > > to specify the location of function arguments and where the function
> > > result is to be found in the asm() that calls the external function.
> > > Register allocator then uses this information to move arguments to the
> > > right place before the call.
> > >
> > > The above approach, when used to emulate an insn,  has a drawback.
> > > When the instruction is available as an alternative, it still has
> > > fixed input and output registers, forced by the ABI of the function
> > > call. Register allocator has to move registers unnecessarily to
> > > satisfy the constraints of the function call, not the instruction
> > > itself.  
> >
> > Forcing the argument into a fixed register won't make much difference
> > to execution time.
> > Just a bit more work for the instruction decoder and a few more bytes
> > of I-cache.
> > (Register-register moves can be zero clocks.)
> > In many cases (but not as many as you might hope for) the compiler
> > back-tracks the input register requirement to the instruction that
> > generates the value.  
> 
> I'm afraid I don't fully understand what you mean by "back-tracking
> the input register requirement".

If the asm block requires an input in %rdx then the instruction that
creates the value would be expected to put it into %rdx ready for the
asm block.
Even if it doesn't, a register-register move is often implemented without
using the ALU by 'register renaming' (there is an indirection between
the register 'number' the code uses and the physical latches that hold
the value, so multiple copies of (say) %rax can be live at the same time).
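A minimal sketch of that back-tracking (illustrative only, not the kernel
code): if the constraints pin the input to %rdi ("D") and the output to
%rax ("a"), roughly what a fixed call ABI would demand, the compiler will
normally compute the value straight into %rdi rather than computing it
elsewhere and adding a mov:

	/* Sketch: input forced into %rdi, output into %rax.
	 * (a + b) is typically computed directly in %edi.
	 */
	static inline unsigned int popcnt_fixed_regs(unsigned int a, unsigned int b)
	{
		unsigned int res;

		asm ("popcntl %[val], %[cnt]"
		     : [cnt] "=a" (res)
		     : [val] "D" (a + b));
		return res;
	}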

> However, with:
> 
> asm("insn %0, %1" : "=r" (out) : "r" (in));
> 
> the compiler is not obliged to match input with output, although many
> times it does so (especially when input argument is dead). To avoid
> false dependence on the output, we should force the compiler to always
> match input and output:
> 
> asm("insn %0, %1" : "=r" (out) : "0" (in));

I'd expect the compiler to generate better code if it is allowed to
use separate registers for the input and output.
It may be able to use the input value again.
There is no 'dependency' on the output register (unless the instruction
only updates the low bits).
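For reference, the two constraint styles side by side (a sketch, not the
kernel code):

	/* separate registers: output may land elsewhere, 'in' stays usable */
	static inline unsigned int popcnt_separate(unsigned int in)
	{
		unsigned int out;

		asm ("popcntl %1, %0" : "=r" (out) : "r" (in));
		return out;
	}

	/* matching constraint: "0" forces output into the input's register */
	static inline unsigned int popcnt_matched(unsigned int in)
	{
		unsigned int out;

		asm ("popcntl %1, %0" : "=r" (out) : "0" (in));
		return out;
	}

The first leaves the compiler free to keep 'in' live for later use; the
second clobbers 'in' but guarantees the destination register was just
written with the input value.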

> 
> and this will resolve false dependence (input register obviously needs
> to be ready before insn) at the expense of an extra move instruction
> in front of the insn in case input is not dead. This is unfortunately
> not possible when one of the alternatives is a function call, where
> location of input and output arguments is specified by ABI.
> 
> > In this case the called function needs two writeable registers.
> > I think you can tell gcc the input is invalidated and the output
> > is 'early clobber' so that the registers are different.  
> 
> Yes, my first patch used this approach, where output operand is cleared first:

That clearing of the output serves an entirely different purpose.
> 
> asm("xorl %0, %0; popcntl %1, %0" : "=&r" (out) : "rm" (in));
> 
> Please note that "earlyclobbered" output reg can't be matched with
> input reg, or with any reg that forms the address.

But you want them to be different.
The called function needs to do multiple 'shift and add' sequences.
To do that it needs a scratch register.
So if the asm block requires the input in %rdx, puts the output in %rax
and destroys %rdx, you can write a function that doesn't need any other
registers.
If you try really hard you can make the called function depend on the
register the compiler selects - and write a different copy for each.
Not worth it here.
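A rough sketch of that idea (helper name hypothetical, kernel context
assumed): the helper is written in assembly with a hand-rolled ABI that
takes its argument in %rdx, returns the count in %rax and may only trash
%rdx, so the call site tells the compiler about exactly those two
registers and nothing else:

	/* Hypothetical asm helper: argument in %edx, result in %eax,
	 * only %rdx destroyed.  The helper itself would have to be
	 * written in assembly to honour this ABI.
	 */
	static inline unsigned int hweight32_custom_abi(unsigned int w)
	{
		unsigned int res;

		asm ("call __my_sw_hweight32_rdx"	/* hypothetical name */
		     : "=a" (res), "+d" (w)	/* %rdx is the argument and is clobbered */
		     : /* no other inputs; the kernel would also add
		          ASM_CALL_CONSTRAINT to the outputs */);
		return res;
	}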

	David
