Message-ID: <CAFULd4bCnnL-CBFwgAQtN9S+sUE_wikda6E+8k9632J9b62dCg@mail.gmail.com>
Date: Sat, 29 Mar 2025 10:19:37 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, 
	Thomas Gleixner <tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH -tip 2/2] x86/hweight: Use POPCNT when available with
 X86_NATIVE_CPU option

On Tue, Mar 25, 2025 at 10:56 PM Ingo Molnar <mingo@...nel.org> wrote:
>
>
> * Uros Bizjak <ubizjak@...il.com> wrote:
>
> > Emit naked POPCNT instruction when available with X86_NATIVE_CPU
> > option. The compiler is not bound by ABI when emitting the instruction
> > without the fallback call to __sw_hweight{32,64}() library function
> > and has much more freedom to allocate input and output operands,
> > including memory input operand.
> >
> > The code size of x86_64 defconfig (with X86_NATIVE_CPU option)
> > shrinks by 599 bytes:
> >
> >   add/remove: 0/0 grow/shrink: 45/197 up/down: 843/-1442 (-599)
> >   Total: Before=22710531, After=22709932, chg -0.00%
> >
> > The asm changes from e.g.:
> >
> >          3bf9c:       48 8b 3d 00 00 00 00    mov    0x0(%rip),%rdi
> >          3bfa3:       e8 00 00 00 00          call   3bfa8 <...>
> >          3bfa8:       90                      nop
> >          3bfa9:       90                      nop
> >
> > with:
> >
> >            34b:       31 c0                   xor    %eax,%eax
> >            34d:       f3 48 0f b8 c7          popcnt %rdi,%rax
> >
> > in the .altinstr_replacement section
> >
> > to:
> >
> >          3bfdc:       31 c0                   xor    %eax,%eax
> >          3bfde:       f3 48 0f b8 05 00 00    popcnt 0x0(%rip),%rax
> >          3bfe5:       00 00
> >
> > where there is no need for an entry in the .altinstr_replacement
> > section, shrinking all text sections by 9476 bytes:
> >
> >           text           data     bss      dec            hex filename
> >       27267068        4643047  814852 32724967        1f357e7 vmlinux-old.o
> >       27257592        4643047  814852 32715491        1f332e3 vmlinux-new.o
>
> > +#ifdef __POPCNT__
> > +     asm_inline (ASM_FORCE_CLR "popcntl %[val], %[cnt]"
> > +                 : [cnt] "=&r" (res)
> > +                 : [val] ASM_INPUT_RM (w));
> > +#else
> >       asm_inline (ALTERNATIVE(ANNOTATE_IGNORE_ALTERNATIVE
> >                               "call __sw_hweight32",
> >                               ASM_CLR "popcntl %[val], %[cnt]",
> >                               X86_FEATURE_POPCNT)
> >                        : [cnt] "=a" (res), ASM_CALL_CONSTRAINT
> >                        : [val] REG_IN (w));
>
> So a better optimization I think would be to declare and implement
> __sw_hweight32 with a different, less intrusive function call ABI that

With an external function, the ABI specifies the location of the input
argument and of the function result. Unless we want to declare the
whole function (some 20 instructions) as an inline asm() body, we have
to specify in the asm() that calls the external function where the
arguments go and where the result is to be found. The register
allocator then uses this information to move the arguments into place
before the call.
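
To make this concrete, here is a self-contained user-space sketch of
the pattern (my own illustration, not the kernel code; the names
my_sw_hweight32 and hweight32_via_call are hypothetical, and the callee
is written in plain asm so that, like __sw_hweight32, it preserves all
registers except the %rax result; build with gcc -O2 -mno-red-zone,
since calling a function from inline asm is only safe without the red
zone, which is how the kernel is built anyway):

#include <stdio.h>

/* Stand-in for __sw_hweight32: argument in %rdi, result in %rax,
   everything else preserved, so the caller needs no clobber list.  */
asm (".globl my_sw_hweight32\n"
     "my_sw_hweight32:\n"
     "        pushq   %rdi\n"
     "        pushq   %rcx\n"
     "        xorl    %eax, %eax\n"
     "1:      testl   %edi, %edi\n"
     "        je      2f\n"
     "        movl    %edi, %ecx\n"
     "        andl    $1, %ecx\n"
     "        addl    %ecx, %eax\n"
     "        shrl    $1, %edi\n"
     "        jmp     1b\n"
     "2:      popq    %rcx\n"
     "        popq    %rdi\n"
     "        ret\n");

static inline unsigned int hweight32_via_call(unsigned int w)
{
        unsigned int res;

        /* The asm() must pin the argument to %rdi ("D") and the
           result to %rax ("=a") because that is where the callee
           expects them, no matter where the values currently live.  */
        asm ("call my_sw_hweight32"
             : "=a" (res)
             : "D" (w));

        return res;
}

int main(void)
{
        printf("%u\n", hweight32_via_call(0xdeadbeef));  /* prints 24 */
        return 0;
}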

The above approach has a drawback when used to emulate an insn: even
when the instruction is available as an alternative, it still has
fixed input and output registers, forced on it by the ABI of the
fallback function call. The register allocator then has to move values
around unnecessarily to satisfy the constraints of the function call
rather than those of the instruction itself.

The proposed solution builds on the fact that with -march=native (and
also when -mpopcnt is given explicitly on the command line), the
compiler signals the availability of the ISA extension by defining the
corresponding preprocessor macro. We can use this macro to relax the
constraints to fit the instruction rather than the ABI of the fallback
function call. On x86, the instruction can also take its operand
directly from memory, avoiding the clobbering of a temporary input
register.
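
As an aside (not from the patch; assumes GCC or Clang targeting x86),
the macro is easy to observe: compiling this fragment with -mpopcnt,
or with -march=native on a POPCNT-capable machine, takes the first
branch:

/* Prints one of the two messages at compile time.  */
#ifdef __POPCNT__
#pragma message "__POPCNT__ defined: emit a naked popcnt"
#else
#pragma message "__POPCNT__ not defined: keep the alternatives-based call"
#endif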

Without the workaround for the (now obsolete) false output dependency
of POPCNT on its destination register (the register clear emitted by
ASM_FORCE_CLR), the change becomes simply:

#ifdef __POPCNT__
     asm ("popcntl %[val], %[cnt]"
          : [cnt] "=r" (res)
          : [val] ASM_INPUT_RM (w));
#else

Besides the reported savings of ~600 bytes in the .text section, this
also allows the register allocator to allocate registers (and take
input operands directly from memory) more optimally, on top of the
additional ~9k saved in the .altinstr_replacement section.
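
For anyone who wants to reproduce the effect outside the kernel, here
is a minimal user-space sketch (again my own, with plain "rm" standing
in for the kernel's ASM_INPUT_RM macro); compile once with -O2 and
once with -O2 -mpopcnt and compare the generated code:

#include <stdio.h>

static inline unsigned int popcnt32(unsigned int w)
{
        unsigned int res;

#ifdef __POPCNT__
        /* "rm" lets the compiler feed the operand from a register or
           directly from memory; no ABI-mandated registers involved.  */
        asm ("popcntl %[val], %[cnt]"
             : [cnt] "=r" (res)
             : [val] "rm" (w));
#else
        /* Portable fallback when POPCNT is not guaranteed at build time.  */
        res = __builtin_popcount(w);
#endif
        return res;
}

int main(void)
{
        printf("%u\n", popcnt32(0xdeadbeef));   /* prints 24 */
        return 0;
}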

The patch also serves as an example of how -march=native enables
further optimizations involving additional ISA extensions.

Thanks,
Uros.
