Open Source and information security mailing list archives
Date: Sun, 11 Dec 2016 22:50:16 -0800
From: tip-bot for Peter Zijlstra <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: mingo@...nel.org, akataria@...are.com, xiaolong.ye@...el.com,
    tglx@...utronix.de, xinhui.pan@...ux.vnet.ibm.com, bp@...en8.de,
    linux-kernel@...r.kernel.org, peterz@...radead.org,
    rusty@...tcorp.com.au, hpa@...or.com, pbonzini@...hat.com,
    jeremy@...p.org, chrisw@...s-sol.org, torvalds@...ux-foundation.org
Subject: [tip:locking/core] x86/paravirt: Fix bool return type for PVOP_CALL()

Commit-ID:  11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Gitweb:     http://git.kernel.org/tip/11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Author:     Peter Zijlstra <peterz@...radead.org>
AuthorDate: Thu, 8 Dec 2016 16:42:15 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Sun, 11 Dec 2016 13:09:20 +0100

x86/paravirt: Fix bool return type for PVOP_CALL()

Commit:

  3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")

introduced a paravirt op with bool return type [*]

It turns out that the PVOP_CALL*() macros miscompile when rettype is
bool. Code that looked like:

     83 ef 01                sub    $0x1,%edi
     ff 15 32 a0 d8 00       callq  *0xd8a032(%rip)        # ffffffff81e28120 <pv_lock_ops+0x20>
     84 c0                   test   %al,%al

ended up looking like so after PVOP_CALL1() was applied:

     83 ef 01                sub    $0x1,%edi
     48 63 ff                movslq %edi,%rdi
     ff 14 25 20 81 e2 81    callq  *0xffffffff81e28120
     48 85 c0                test   %rax,%rax

Note how it tests the whole of %rax, even though a typical bool return
function only sets %al, like:

     0f 95 c0                setne  %al
     c3                      retq

This is because ____PVOP_CALL() does:

     __ret = (rettype)__eax;

and while regular integer type casts truncate the result, a cast to
bool tests for any !0 value.

Fix this by explicitly truncating to sizeof(rettype) before casting.

[*] The actual bug should've been exposed in commit:

      446f3dc8cc0a ("locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests")

    but that didn't properly implement the paravirt call.
Reported-by: kernel test robot <xiaolong.ye@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Alok Kataria <akataria@...are.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Chris Wright <chrisw@...s-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Peter Anvin <hpa@...or.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rusty Russell <rusty@...tcorp.com.au>
Cc: Thomas Gleixner <tglx@...utronix.de>
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/20161208154349.346057680@infradead.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/include/asm/paravirt_types.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 2614bd7..3f2bc0f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -510,6 +510,18 @@ int paravirt_disable_iospace(void);
 #define PVOP_TEST_NULL(op)	((void)op)
 #endif
 
+#define PVOP_RETMASK(rettype)						\
+	({	unsigned long __mask = ~0UL;				\
+		switch (sizeof(rettype)) {				\
+		case 1: __mask =       0xffUL; break;			\
+		case 2: __mask =     0xffffUL; break;			\
+		case 4: __mask = 0xffffffffUL; break;			\
+		default: break;						\
+		}							\
+		__mask;							\
+	})
+
+
 #define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,		\
 		      pre, post, ...)					\
 	({								\
@@ -537,7 +549,7 @@ int paravirt_disable_iospace(void);
 			       paravirt_clobber(clbr),			\
 			       ##__VA_ARGS__				\
 			     : "memory", "cc" extra_clbr);		\
-		__ret = (rettype)__eax;					\
+		__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
 	}								\
 	__ret;								\
 })