Date:	Fri, 5 Feb 2016 08:28:12 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Denys Vlasenko <dvlasenk@...hat.com>
Cc:	Thomas Graf <tgraf@...g.ch>, Peter Zijlstra <peterz@...radead.org>,
	David Rientjes <rientjes@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] force inlining of some byteswap operations


* Denys Vlasenko <dvlasenk@...hat.com> wrote:

> Sometimes gcc mysteriously doesn't inline
> very small functions we expect to be inlined. See
>     https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
> 
> With this .config:
> http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os,
> the following functions get deinlined many times.
> Examples of disassembly:
> 
> <get_unaligned_be16> (12 copies, 51 calls):
>        66 8b 07                mov    (%rdi),%ax
>        55                      push   %rbp
>        48 89 e5                mov    %rsp,%rbp
>        86 e0                   xchg   %ah,%al
>        5d                      pop    %rbp
>        c3                      retq
> 
> <get_unaligned_be32> (12 copies, 135 calls):
>        8b 07                   mov    (%rdi),%eax
>        55                      push   %rbp
>        48 89 e5                mov    %rsp,%rbp
>        0f c8                   bswap  %eax
>        5d                      pop    %rbp
>        c3                      retq
> 
> <get_unaligned_be64> (2 copies, 20 calls):
>        48 8b 07                mov    (%rdi),%rax
>        55                      push   %rbp
>        48 89 e5                mov    %rsp,%rbp
>        48 0f c8                bswap  %rax
>        5d                      pop    %rbp
>        c3                      retq
> 
> <__swab16p> (16 copies, 146 calls):
>        55                      push   %rbp
>        89 f8                   mov    %edi,%eax
>        86 e0                   xchg   %ah,%al
>        48 89 e5                mov    %rsp,%rbp
>        5d                      pop    %rbp
>        c3                      retq
> 
> <__swab32p> (43 copies, ~560 calls):
>        55                      push   %rbp
>        89 f8                   mov    %edi,%eax
>        0f c8                   bswap  %eax
>        48 89 e5                mov    %rsp,%rbp
>        5d                      pop    %rbp
>        c3                      retq
> 
> <__swab64p> (21 copies, 119 calls):
>        55                      push   %rbp
>        48 89 f8                mov    %rdi,%rax
>        48 0f c8                bswap  %rax
>        48 89 e5                mov    %rsp,%rbp
>        5d                      pop    %rbp
>        c3                      retq
> 
> <__swab32s> (6 copies, 47 calls):
>        8b 07                   mov    (%rdi),%eax
>        55                      push   %rbp
>        48 89 e5                mov    %rsp,%rbp
>        0f c8                   bswap  %eax
>        89 07                   mov    %eax,(%rdi)
>        5d                      pop    %rbp
>        c3                      retq
> 
> This patch fixes this via s/inline/__always_inline/.
> Code size decrease after the patch is ~4.5k:
> 
>     text     data      bss       dec     hex filename
> 92202377 20826112 36417536 149446025 8e85d89 vmlinux
> 92197848 20826112 36417536 149441496 8e84bd8 vmlinux5_swap_after
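
For illustration, a minimal sketch of what the s/inline/__always_inline/
change means for one of these helpers (__swab32p shown; the exact header
contents and #ifdef structure touched by the real patch may differ):

	/* Before: with CONFIG_OPTIMIZE_INLINING, "inline" is only a hint,
	 * so gcc at -Os may emit out-of-line copies like the ones above. */
	static inline __u32 __swab32p(const __u32 *p)
	{
		return __swab32(*p);
	}

	/* After: __always_inline overrides the compiler's size heuristics
	 * and expands the byteswap at every call site. */
	static __always_inline __u32 __swab32p(const __u32 *p)
	{
		return __swab32(*p);
	}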

Acked-by: Ingo Molnar <mingo@...nel.org>

Thanks,

	Ingo
