Date:   Sat, 25 Nov 2017 11:36:24 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Nadav Amit <namit@...are.com>
cc:     linux-kernel@...r.kernel.org, linux-edac@...r.kernel.org,
        nadav.amit@...il.com, Andy Lutomirski <luto@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        Tony Luck <tony.luck@...el.com>,
        Borislav Petkov <bp@...en8.de>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH v2 2/2] x86: disable IRQs before changing CR4

On Fri, 24 Nov 2017, Nadav Amit wrote:
>  /* Set in this cpu's CR4. */
> -static inline void cr4_set_bits(unsigned long mask)
> +static inline void cr4_set_bits_irqs_off(unsigned long mask)

This change is kinda weird. I'd expect that there is a corresponding
function cr4_set_bits() which takes care of disabling interrupts. But there
is not. All it does is create a lot of pointless churn.

>  static __always_inline void setup_smep(struct cpuinfo_x86 *c)
>  static __init int setup_disable_smap(char *arg)
>  static __always_inline void setup_pku(struct cpuinfo_x86 *c)

Why are you not doing this at the call site around all calls which fiddle
with cr4, i.e. in identify_cpu()?

identify_cpu() is called from two places:

    identify_boot_cpu() and identify_secondary_cpu()

identify_secondary_cpu() is called with interrupts disabled anyway, and there
is no reason why we can't enforce interrupts being disabled around
identify_cpu() completely.
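
For illustration, something like this at the boot CPU call site would be
enough (rough sketch, untested; assuming identify_boot_cpu() is still the
place which calls identify_cpu(&boot_cpu_data)):

	void __init identify_boot_cpu(void)
	{
		unsigned long flags;

		/*
		 * identify_secondary_cpu() already runs with interrupts
		 * disabled, so disabling them here lets identify_cpu()
		 * rely on IRQs being off for all the CR4 fiddling.
		 */
		local_irq_save(flags);
		identify_cpu(&boot_cpu_data);
		local_irq_restore(flags);
	}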

But if we actually do the right thing, i.e. have cr4_set_bits() and
cr4_set_bits_irqsoff(), all of this churn goes away magically.
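
Something along these lines (sketch only, untested; assuming the existing
__cr4_set() helper and the percpu cpu_tlbstate.cr4 cache stay as they are):

	/* Set in this cpu's CR4. Caller must have interrupts disabled. */
	static inline void cr4_set_bits_irqsoff(unsigned long mask)
	{
		unsigned long cr4;

		cr4 = this_cpu_read(cpu_tlbstate.cr4);
		if ((cr4 | mask) != cr4)
			__cr4_set(cr4 | mask);
	}

	/* Set in this cpu's CR4. Safe to call with interrupts enabled. */
	static inline void cr4_set_bits(unsigned long mask)
	{
		unsigned long flags;

		local_irq_save(flags);
		cr4_set_bits_irqsoff(mask);
		local_irq_restore(flags);
	}

The clear and toggle variants would follow the same pattern.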

Then the only place which needs to be changed is the context switch, because
there interrupts are already disabled and we really care about performance.

> @@ -293,7 +303,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>  	}
>  
>  	if ((tifp ^ tifn) & _TIF_NOTSC)
> -		cr4_toggle_bits(X86_CR4_TSD);
> +		cr4_toggle_bits_irqs_off(X86_CR4_TSD);
>  

Thanks,

	tglx
