Message-ID: <20211104202220.xwm23nbpvwma6wds@treble>
Date:   Thu, 4 Nov 2021 13:22:20 -0700
From:   Josh Poimboeuf <jpoimboe@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     x86@...nel.org, linux-kernel@...r.kernel.org, mark.rutland@....com,
        dvyukov@...gle.com, seanjc@...gle.com, pbonzini@...hat.com,
        mbenes@...e.cz
Subject: Re: [RFC][PATCH 02/22] x86,mmx_32: Remove .fixup usage

On Thu, Nov 04, 2021 at 05:47:31PM +0100, Peter Zijlstra wrote:
> This code puts an exception table entry on the "PREFIX" instruction to
> overwrite it with a jmp.d8 when it triggers an exception. Except of
> course, our code is no longer writable, also SMP.
> 
> Replace it with ALTERNATIVE, the novel
> 
> XXX: arguably we should just delete this code
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/x86/lib/mmx_32.c |   83 ++++++++++++++++----------------------------------
>  1 file changed, 27 insertions(+), 56 deletions(-)
> 
> --- a/arch/x86/lib/mmx_32.c
> +++ b/arch/x86/lib/mmx_32.c
> @@ -50,23 +50,17 @@ void *_mmx_memcpy(void *to, const void *
>  	kernel_fpu_begin_mask(KFPU_387);
>  
>  	__asm__ __volatile__ (
> -		"1: prefetch (%0)\n"		/* This set is 28 bytes */
> -		"   prefetch 64(%0)\n"
> -		"   prefetch 128(%0)\n"
> -		"   prefetch 192(%0)\n"
> -		"   prefetch 256(%0)\n"
> -		"2:  \n"
> -		".section .fixup, \"ax\"\n"
> -		"3: movw $0x1AEB, 1b\n"	/* jmp on 26 bytes */
> -		"   jmp 2b\n"
> -		".previous\n"
> -			_ASM_EXTABLE(1b, 3b)
> -			: : "r" (from));
> +		ALTERNATIVE "",
> +			    "prefetch (%0)\n"
> +			    "prefetch 64(%0)\n"
> +			    "prefetch 128(%0)\n"
> +			    "prefetch 192(%0)\n"
> +			    "prefetch 256(%0)\n", X86_FEATURE_3DNOW

I think this should instead be X86_FEATURE_3DNOWPREFETCH (which isn't
3DNow-specific and should really just be called X86_FEATURE_PREFETCH
anyway).
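
Concretely, that would just swap the feature bit in the new hunk; a sketch (kernel-context fragment only — ALTERNATIVE and the feature bits come from <asm/alternative.h> and <asm/cpufeatures.h>, so this doesn't build standalone):

```c
/* Same hunk as above, keyed on the PREFETCH feature bit rather than
 * 3DNOW, since the prefetch insn is available without 3DNow proper.
 */
	__asm__ __volatile__ (
		ALTERNATIVE "",
			    "prefetch (%0)\n"
			    "prefetch 64(%0)\n"
			    "prefetch 128(%0)\n"
			    "prefetch 192(%0)\n"
			    "prefetch 256(%0)\n", X86_FEATURE_3DNOWPREFETCH
		: : "r" (from));
```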

> +		: : "r" (from));
>  
>  	for ( ; i > 5; i--) {
>  		__asm__ __volatile__ (
> -		"1:  prefetch 320(%0)\n"
> -		"2:  movq (%0), %%mm0\n"
> +		"  movq (%0), %%mm0\n"

I'm not sure why this prefetch was removed.  It could also be put
behind an alternative, just like the first one.
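
i.e., something along these lines (again a kernel-context fragment, with the rest of the loop's asm body elided as in the diff above):

```c
/* Sketch: keep the streaming prefetch in the copy loop, gated behind
 * the same feature-checked ALTERNATIVE as the lead-in prefetches.
 */
	for ( ; i > 5; i--) {
		__asm__ __volatile__ (
		ALTERNATIVE "",
			    "prefetch 320(%0)\n", X86_FEATURE_3DNOW
		"  movq (%0), %%mm0\n"
		/* ... rest of the 64-byte MMX copy body ... */
```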

-- 
Josh
