Date:   Wed, 5 Aug 2020 08:54:01 -0700
From:   Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To:     Borislav Petkov <bp@...e.de>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Andy Lutomirski <luto@...nel.org>, x86@...nel.org,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Dave Hansen <dave.hansen@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Cathy Zhang <cathy.zhang@...el.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        Kyung Min Park <kyung.min.park@...el.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        linux-kernel@...r.kernel.org,
        Ricardo Neri <ricardo.neri@...el.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        linux-edac@...r.kernel.org
Subject: Re: [PATCH v2] x86/cpu: Use SERIALIZE in sync_core() when available

On Wed, Aug 05, 2020 at 06:48:40AM +0200, Borislav Petkov wrote:
> On Tue, Aug 04, 2020 at 07:10:59PM -0700, Ricardo Neri wrote:
> > The SERIALIZE instruction gives software a way to force the processor to
> > complete all modifications to flags, registers and memory from previous
> > instructions and drain all buffered writes to memory before the next
> > instruction is fetched and executed. Thus, it serves the purpose of
> > sync_core(). Use it when available.
> > 
> > Commit 7117f16bf460 ("objtool: Fix ORC vs alternatives") enforced stack
> > invariance in alternatives. The iret-to-self does not comply with such
> > invariance. Thus, it cannot be used inside alternative code. Instead, use
> > an alternative that jumps to SERIALIZE when available.
> > 
> > Cc: Andy Lutomirski <luto@...nel.org>
> > Cc: Cathy Zhang <cathy.zhang@...el.com>
> > Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> > Cc: Fenghua Yu <fenghua.yu@...el.com>
> > Cc: "H. Peter Anvin" <hpa@...or.com>
> > Cc: Kyung Min Park <kyung.min.park@...el.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: "Ravi V. Shankar" <ravi.v.shankar@...el.com>
> > Cc: Sean Christopherson <sean.j.christopherson@...el.com>
> > Cc: linux-edac@...r.kernel.org
> > Cc: linux-kernel@...r.kernel.org
> > Suggested-by: Andy Lutomirski <luto@...nel.org>
> > Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
> > ---
> > This is a v2 from my initial submission [1]. The first three patches of
> > the series have been merged in Linus' tree. Hence, I am submitting only
> > this patch for review.
> > 
> > [1]. https://lkml.org/lkml/2020/7/27/8
> > 
> > Changes since v1:
> >  * Support SERIALIZE using alternative runtime patching.
> >    (Peter Zijlstra, H. Peter Anvin)
> >  * Added a note to specify which version of binutils supports SERIALIZE.
> >    (Peter Zijlstra)
> >  * Verified that (::: "memory") is used. (H. Peter Anvin)
> > ---
> >  arch/x86/include/asm/special_insns.h |  2 ++
> >  arch/x86/include/asm/sync_core.h     | 10 +++++++++-
> >  2 files changed, 11 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
> > index 59a3e13204c3..25cd67801dda 100644
> > --- a/arch/x86/include/asm/special_insns.h
> > +++ b/arch/x86/include/asm/special_insns.h
> > @@ -10,6 +10,8 @@
> >  #include <linux/irqflags.h>
> >  #include <linux/jump_label.h>
> >  
> > +/* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */
> > +#define __ASM_SERIALIZE ".byte 0xf, 0x1, 0xe8"
> >  /*
> >   * Volatile isn't enough to prevent the compiler from reordering the
> >   * read/write functions for the control registers and messing everything up.
> > diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
> > index fdb5b356e59b..201ea3d9a6bd 100644
> > --- a/arch/x86/include/asm/sync_core.h
> > +++ b/arch/x86/include/asm/sync_core.h
> > @@ -5,15 +5,19 @@
> >  #include <linux/preempt.h>
> >  #include <asm/processor.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/special_insns.h>
> >  
> >  #ifdef CONFIG_X86_32
> >  static inline void iret_to_self(void)
> >  {
> >  	asm volatile (
> > +		ALTERNATIVE("", "jmp 2f", X86_FEATURE_SERIALIZE)
> >  		"pushfl\n\t"
> >  		"pushl %%cs\n\t"
> >  		"pushl $1f\n\t"
> >  		"iret\n\t"
> > +		"2:\n\t"
> > +		__ASM_SERIALIZE "\n"
> >  		"1:"
> >  		: ASM_CALL_CONSTRAINT : : "memory");
> >  }
> > @@ -23,6 +27,7 @@ static inline void iret_to_self(void)
> >  	unsigned int tmp;
> >  
> >  	asm volatile (
> > +		ALTERNATIVE("", "jmp 2f", X86_FEATURE_SERIALIZE)
> 
> Why is this and above stuck inside the asm statement?
> 
> Why can't you simply do:
> 
> 	if (static_cpu_has(X86_FEATURE_SERIALIZE)) {
> 		asm volatile(__ASM_SERIALIZE ::: "memory");
> 		return;
> 	}
> 
> on function entry instead of making it more unreadable for no particular
> reason?

In my first submission I had implemented it as you describe. The only
difference was that I used boot_cpu_has() instead of static_cpu_has(),
as the comment on the latter states:
	"Use static_cpu_has() only in fast paths (...) boot_cpu_has() is
	 already fast enough for the majority of cases..."

sync_core_before_usermode() already handles what I think are the
critical paths.
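
For reference, this is roughly the shape the v1 code had (a minimal
sketch of that approach, not the actual v1 patch; it omits the comments
in the real sync_core() and assumes the __ASM_SERIALIZE define from the
hunk above):

	static inline void sync_core(void)
	{
		/*
		 * If the CPU supports SERIALIZE, one instruction is enough
		 * to serialize instruction execution.
		 */
		if (boot_cpu_has(X86_FEATURE_SERIALIZE)) {
			/* Raw opcode; the mnemonic needs binutils >= 2.35. */
			asm volatile(__ASM_SERIALIZE ::: "memory");
			return;
		}

		/* Otherwise fall back to the iret-to-self dance. */
		iret_to_self();
	}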

Thanks and BR,
Ricardo
