Message-ID: <alpine.DEB.2.21.1906141646320.1722@nanos.tec.linutronix.de>
Date:   Fri, 14 Jun 2019 16:57:36 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Kees Cook <keescook@...omium.org>
cc:     Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Dave Hansen <dave.hansen@...el.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

On Tue, 4 Jun 2019, Kees Cook wrote:
> ---
> v2:
> - move setup until after CPU feature detection and selection.
> - refactor to use static branches to have atomic enabling.
> - only perform the "or" after a failed check.
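
For reference, the enabling side of the v2 rework presumably looks
something like the sketch below. This is illustrative only; the
setup_cr_pinning() name, the exact pin mask and the placement are
assumptions, not taken from the patch:

	DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
	unsigned long cr4_pinned_bits __ro_after_init;

	/*
	 * Sketch: runs once after CPU feature detection/selection,
	 * captures the security-sensitive CR4 bits that are set at that
	 * point and flips the static branch on, so every subsequent
	 * native_write_cr4() call enforces them.
	 */
	static void __init setup_cr_pinning(void)
	{
		unsigned long mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP;

		cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
		static_key_enable(&cr_pinning.key);
	}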

Maybe I'm missing something, but

>  static inline unsigned long native_read_cr0(void)
>  {
>  	unsigned long val;
> @@ -74,7 +80,23 @@ static inline unsigned long native_read_cr4(void)
>  
>  static inline void native_write_cr4(unsigned long val)
>  {
> -	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
> +	unsigned long bits_missing = 0;
> +
> +set_register:
> +	if (static_branch_likely(&cr_pinning))
> +		val |= cr4_pinned_bits;

The OR happens before the first write, and therefore

> +	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
> +
> +	if (static_branch_likely(&cr_pinning)) {
> +		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {

this path can never be entered when the function is called properly. Sure,
in the attack scenario a jump directly to the mov cr4 instruction is
caught.
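
Spelled out (illustrative annotation, not code from the patch), the two
entry points look like:

	/*
	 * native_write_cr4(val):
	 *
	 * set_register:
	 *	val |= cr4_pinned_bits;	<- proper calls always pass here,
	 *				   so the pinned bits are set before
	 *				   the write below
	 *	mov val, %cr4		<- an attacker gadget jumping here
	 *				   skips the OR above, which is what
	 *				   the mismatch check then catches
	 */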

Wouldn't it make sense to catch situations where a regular caller provides
@val with pinned bits unset? I.e., move the OR into this code path after
storing bits_missing.

> +			bits_missing = ~val & cr4_pinned_bits;
> +			goto set_register;
> +		}
> +		/* Warn after we've set the missing bits. */
> +		WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
> +			  bits_missing);
> +	}
>  }

Something like this:

	unsigned long bits_missing = 0;

again:
	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));

	if (static_branch_likely(&cr_pinning)) {
		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
			bits_missing = ~val & cr4_pinned_bits;
			val |= cr4_pinned_bits;
			goto again;
		}
		/* Warn after we've set the missing bits. */
		WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
			  bits_missing);
	}
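
With that variant, a buggy but legitimate caller is no longer silently
fixed up before the first write. Illustrative only; the caller below is
hypothetical:

	unsigned long cr4 = native_read_cr4();

	/* Hypothetical bug: a pinned bit cleared by mistake. */
	native_write_cr4(cr4 & ~X86_CR4_SMEP);

	/*
	 * v2: the OR restores SMEP before the mov, no warning.
	 * Suggested version: the first write lacks SMEP, the check stores
	 * bits_missing, the rewrite restores it and WARN_ONCE() fires.
	 */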

Thanks,

	tglx
