Message-ID: <20181005093108.GA24723@gmail.com>
Date:   Fri, 5 Oct 2018 11:31:08 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Nadav Amit <namit@...are.com>
Cc:     "hpa@...or.com" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jan Beulich <JBeulich@...e.com>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH v9 04/10] x86: refcount: prevent gcc distortions


* Nadav Amit <namit@...are.com> wrote:

> > Are you using defconfig or a reasonable distro-config for your tests?
> 
> I think it is best to take the kernel and run localyesconfig for testing.

Ok, agreed - and this makes the numbers you provided pretty representative.

Good - now that all of my concerns have been addressed I'd like to merge the remaining 3 patches 
as well - but they conflict with ongoing x86 work in tip:x86/core. The extable conflict is 
trivial; the jump-label conflict is a bit more involved.

Could you please pick up the updated changelogs below and resolve the conflicts against 
tip:master or tip:x86/build and submit the remaining patches as well?

Thanks,

	Ingo



=============>
commit b82b0b611740c7c88050ba743c398af7eb920029
Author: Nadav Amit <namit@...are.com>
Date:   Wed Oct 3 14:31:00 2018 -0700

    x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs
    
    As described in:
    
      77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
    
    GCC's inlining heuristics are broken with common asm() patterns used in
    kernel code, resulting in the effective disabling of inlining.
    
    The workaround is to define an assembly macro and call it from the inline
    assembly block - which is also a minor cleanup for the jump-label code.
    
    As a result the code size is slightly increased, but inlining decisions
    are better:
    
          text     data     bss      dec     hex  filename
      18163528 10226300 2957312 31347140 1de51c4  ./vmlinux before
      18163608 10227348 2957312 31348268 1de562c  ./vmlinux after (+1128)
    
    And functions such as intel_pstate_adjust_policy_max(),
    kvm_cpu_accept_dm_intr() and kvm_register_readl() are inlined.
    
    Tested-by: Kees Cook <keescook@...omium.org>
    Signed-off-by: Nadav Amit <namit@...are.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
    Cc: Andy Lutomirski <luto@...capital.net>
    Cc: Borislav Petkov <bp@...en8.de>
    Cc: Brian Gerst <brgerst@...il.com>
    Cc: Denys Vlasenko <dvlasenk@...hat.com>
    Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
    Cc: H. Peter Anvin <hpa@...or.com>
    Cc: Kate Stewart <kstewart@...uxfoundation.org>
    Cc: Linus Torvalds <torvalds@...ux-foundation.org>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Philippe Ombredanne <pombredanne@...b.com>
    Cc: Thomas Gleixner <tglx@...utronix.de>
    Link: http://lkml.kernel.org/r/20181003213100.189959-11-namit@vmware.com
    Signed-off-by: Ingo Molnar <mingo@...nel.org>

commit dfc243615d43bb477d1d16a0064fc3d69ade5b3a
Author: Nadav Amit <namit@...are.com>
Date:   Wed Oct 3 14:30:59 2018 -0700

    x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs
    
    As described in:
    
      77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
    
    GCC's inlining heuristics are broken with common asm() patterns used in
    kernel code, resulting in the effective disabling of inlining.
    
    The workaround is to define an assembly macro and call it from the inline
    assembly block - which is a pretty pointless indirection in the
    static_cpu_has() case, but is worth it to improve overall inlining quality.
    
    The patch slightly increases the kernel size:
    
          text     data     bss      dec     hex  filename
      18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux before
      18163528 10226300 2957312 31347140 1de51c4  ./vmlinux after (+693)
    
    And enables the inlining of functions such as free_ldt_pgtables().
    
    Tested-by: Kees Cook <keescook@...omium.org>
    Signed-off-by: Nadav Amit <namit@...are.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
    Cc: Andy Lutomirski <luto@...capital.net>
    Cc: Borislav Petkov <bp@...en8.de>
    Cc: Brian Gerst <brgerst@...il.com>
    Cc: Denys Vlasenko <dvlasenk@...hat.com>
    Cc: H. Peter Anvin <hpa@...or.com>
    Cc: Linus Torvalds <torvalds@...ux-foundation.org>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Thomas Gleixner <tglx@...utronix.de>
    Link: http://lkml.kernel.org/r/20181003213100.189959-10-namit@vmware.com
    Signed-off-by: Ingo Molnar <mingo@...nel.org>

commit 4021bdcd351fd63d8d5e74264ee18d09388f0221
Author: Nadav Amit <namit@...are.com>
Date:   Wed Oct 3 14:30:58 2018 -0700

    x86/extable: Macrofy inline assembly code to work around GCC inlining bugs
    
    As described in:
    
      77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
    
    GCC's inlining heuristics are broken with common asm() patterns used in
    kernel code, resulting in the effective disabling of inlining.
    
    The workaround is to define an assembly macro and call it from the inline
    assembly block - which is also a minor cleanup for the exception table
    code.
    
    Text size goes up a bit:
    
          text     data     bss      dec     hex  filename
      18162555 10226288 2957312 31346155 1de4deb  ./vmlinux before
      18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux after (+292)
    
    But this allows the inlining of functions such as nested_vmx_exit_reflected(),
    set_segment_reg() and __copy_xstate_to_user(), which is a net benefit.
    
    Tested-by: Kees Cook <keescook@...omium.org>
    Signed-off-by: Nadav Amit <namit@...are.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
    Cc: Andy Lutomirski <luto@...capital.net>
    Cc: Borislav Petkov <bp@...en8.de>
    Cc: Brian Gerst <brgerst@...il.com>
    Cc: Denys Vlasenko <dvlasenk@...hat.com>
    Cc: H. Peter Anvin <hpa@...or.com>
    Cc: Josh Poimboeuf <jpoimboe@...hat.com>
    Cc: Linus Torvalds <torvalds@...ux-foundation.org>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Thomas Gleixner <tglx@...utronix.de>
    Link: http://lkml.kernel.org/r/20181003213100.189959-9-namit@vmware.com
    Signed-off-by: Ingo Molnar <mingo@...nel.org>
