Message-ID: <B2980B8C-9440-4BF6-968B-EA8930B10BA4@zytor.com>
Date:	Thu, 21 Jan 2016 14:22:28 -0800
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Borislav Petkov <bp@...e.de>
CC:	Andy Lutomirski <luto@...capital.net>,
	Brian Gerst <brgerst@...il.com>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] x86: static_cpu_has_safe: discard dynamic check after init

On January 21, 2016 2:14:42 PM PST, Borislav Petkov <bp@...e.de> wrote:
>On Wed, Jan 20, 2016 at 02:41:22AM -0800, H. Peter Anvin wrote:
>> Ah. What would be even more of a win would be to rebias
>> static_cpu_has_bug() so that the fallthrough case is the functional
>> one. Easily done by reversing the labels.
>
>By reversing you mean this:
>
>
>---
>diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
>index 77c51f4c15b7..49fa56f2b083 100644
>--- a/arch/x86/include/asm/cpufeature.h
>+++ b/arch/x86/include/asm/cpufeature.h
>@@ -174,10 +174,10 @@ static __always_inline __pure bool _static_cpu_has(u16 bit)
>                             [bitnum] "i" (1 << (bit & 7)),
>                             [cap_word] "m" (((const char *)boot_cpu_data.x86_capability)[bit >> 3])
>                         : : t_yes, t_no);
>-       t_yes:
>-               return true;
>        t_no:
>                return false;
>+       t_yes:
>+               return true;
> #else
>                return boot_cpu_has(bit);
> #endif /* CC_HAVE_ASM_GOTO */
>---
>
>?
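
For reference, the shape under discussion boils down to the minimal sketch below. This is not the kernel's _static_cpu_has (the real one routes the initial JMP through the alternatives sections), but it shows why the label placed textually right after the asm statement is the cheap fall-through outcome, and hence why reversing the labels rebiases the function:

	#include <stdbool.h>

	/*
	 * Hypothetical sketch, not kernel code: the asm either jumps to
	 * t_yes or falls through.  Boot-time patching would turn the JMP
	 * into a NOP for the common case, so whichever return sits
	 * directly after the asm costs zero taken branches.
	 */
	static inline bool sketch_has_bit(void)
	{
		asm goto("jmp %l[t_yes]"	/* imagine this patched to a NOP */
			 : : : : t_yes);
		return false;			/* the "t_no" fall-through path */
	t_yes:
		return true;
	}
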
>
>In any case, here's what happens with the current patchset:
>
>vmlinux:
>
>ffffffff8100472a:       e9 50 0e de 00          jmpq   ffffffff81de557f <__alt_instructions_end+0x7aa>
>ffffffff8100472f:       66 8c d0                mov    %ss,%ax
>ffffffff81004732:       66 83 f8 18             cmp    $0x18,%ax
>ffffffff81004736:       74 07                   je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:       b8 18 00 00 00          mov    $0x18,%eax
>ffffffff8100473d:       8e d0                   mov    %eax,%ss
>ffffffff8100473f:       48 83 c4 18             add    $0x18,%rsp
>ffffffff81004743:       4c 89 e0                mov    %r12,%rax
>ffffffff81004746:       5b                      pop    %rbx
>ffffffff81004747:       41 5c                   pop    %r12
>ffffffff81004749:       41 5d                   pop    %r13
>ffffffff8100474b:       41 5e                   pop    %r14
>ffffffff8100474d:       41 5f                   pop    %r15
>ffffffff8100474f:       5d                      pop    %rbp
>ffffffff81004750:       c3                      retq
>
>That first JMP above sends us to the dynamic section which is in asm
>now:
>
>ffffffff81de557f:       f6 05 8f de d1 ff 01    testb  $0x1,-0x2e2171(%rip)        # ffffffff81b03415 <boot_cpu_data+0x55>
>ffffffff81de5586:       0f 85 a3 f1 21 ff       jne    ffffffff8100472f <__switch_to+0x2df>
>ffffffff81de558c:       e9 ae f1 21 ff          jmpq   ffffffff8100473f <__switch_to+0x2ef>
>
>After X86_FEATURE_ALWAYS patching, that first JMP has become a 2-byte
>JMP:
>
>[    0.306333] apply_alternatives: feat: 3*32+21, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de4e12, len: 5), pad: 0
>[    0.308005] ffffffff8100472a: old_insn: e9 50 0e de 00
>[    0.312012] ffffffff81de4e12: rpl_insn: e9 28 f9 21 ff
>[    0.318201] recompute_jump: target RIP: ffffffff8100473f, new_displ: 0x15
>[    0.320007] recompute_jump: final displ: 0x00000013, JMP 0xffffffff8100473f
>[    0.324005] ffffffff8100472a: final_insn: eb 13 0f 1f 00
>
>so basically we jump over the %ss fixup:
>
>ffffffff8100472a:       eb 13 0f 1f 00          jmp    ffffffff8100473f
>ffffffff8100472f:       66 8c d0                mov    %ss,%ax
>ffffffff81004732:       66 83 f8 18             cmp    $0x18,%ax
>ffffffff81004736:       74 07                   je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:       b8 18 00 00 00          mov    $0x18,%eax
>ffffffff8100473d:       8e d0                   mov    %eax,%ss
>ffffffff8100473f:       48 83 c4 18             add    $0x18,%rsp       <----
>ffffffff81004743:       4c 89 e0                mov    %r12,%rax
>ffffffff81004746:       5b                      pop    %rbx
>ffffffff81004747:       41 5c                   pop    %r12
>ffffffff81004749:       41 5d                   pop    %r13
>ffffffff8100474b:       41 5e                   pop    %r14
>ffffffff8100474d:       41 5f                   pop    %r15
>ffffffff8100474f:       5d                      pop    %rbp
>ffffffff81004750:       c3                      retq
>
>
>After X86_BUG_SYSRET_SS_ATTRS patching:
>
>[    0.330367] apply_alternatives: feat: 16*32+8, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de3996, len: 0), pad: 0
>[    0.332005] ffffffff8100472a: old_insn: eb 13 0f 1f 00
>[    0.338332] ffffffff8100472a: final_insn: 0f 1f 44 00 00
>
>ffffffff8100472a:       0f 1f 44 00 00          nop
>ffffffff8100472f:       66 8c d0                mov    %ss,%ax
>ffffffff81004732:       66 83 f8 18             cmp    $0x18,%ax
>ffffffff81004736:       74 07                   je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:       b8 18 00 00 00          mov    $0x18,%eax
>ffffffff8100473d:       8e d0                   mov    %eax,%ss
>ffffffff8100473f:       48 83 c4 18             add    $0x18,%rsp
>ffffffff81004743:       4c 89 e0                mov    %r12,%rax
>ffffffff81004746:       5b                      pop    %rbx
>ffffffff81004747:       41 5c                   pop    %r12
>ffffffff81004749:       41 5d                   pop    %r13
>ffffffff8100474b:       41 5e                   pop    %r14
>ffffffff8100474d:       41 5f                   pop    %r15
>ffffffff8100474f:       5d                      pop    %rbp
>ffffffff81004750:       c3                      retq
>
>So the penalty for the !X86_BUG_SYSRET_SS_ATTRS CPUs is a 2-byte JMP.
>Do we care?
>
>In the case we do, we could do this:
>
>	JMP ss_fixup
>ret:
>	RET			/* return prev_p */
>ss_fixup:
>	<fixup SS>
>	JMP ret
>
>and the !X86_BUG_SYSRET_SS_ATTRS CPUs would overwrite that
>"JMP ss_fixup" with a NOP and they're fine. However, the
>X86_BUG_SYSRET_SS_ATTRS CPUs will have to do two jumps, one to the
>fixup code and one back to RET.
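
Spelled with the asm-side ALTERNATIVE macro, that layout might look roughly like the sketch below. This is only a sketch under assumptions: it presumes .S context, the label names are invented, and the patching direction is inverted relative to the description above, since a plain alternative installs the replacement when the bug bit is set, the default bytes being NOP padding:

	ALTERNATIVE "", "jmp .Lss_fixup", X86_BUG_SYSRET_SS_ATTRS
.Lret:
	ret				/* return prev_p */
.Lss_fixup:
	<fixup SS>
	jmp .Lret
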
>
>Now, how about I convert
>
>                unsigned short ss_sel;
>                savesegment(ss, ss_sel);
>                if (ss_sel != __KERNEL_DS)
>                        loadsegment(ss, __KERNEL_DS);
>
>into asm and into an alternative()?
>
>Then, the !X86_BUG_SYSRET_SS_ATTRS CPUs will trade that JMP for a
>bunch of NOPs, which will pollute the I$.
>
>Hmmm.
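
Concretely, that conversion might look something like the following sketch (untested; it assumes the affected CPUs can simply reload %ss unconditionally, which drops the savesegment()/__KERNEL_DS comparison, and it uses the ALTERNATIVE string macro from <asm/alternative.h>):

	unsigned short ss_sel = __KERNEL_DS;

	/* !bug CPUs execute only padding NOPs; bug CPUs reload SS */
	asm volatile(ALTERNATIVE("",
				 "movl %k0, %%ss",
				 X86_BUG_SYSRET_SS_ATTRS)
		     : : "r" (ss_sel));
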

Yes, having t_no as the fallthrough case ought to move the yes code out of line.

The current code probably pollutes the I$ too.
-- 
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.
