Date:   Thu, 24 Sep 2020 08:24:46 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Dave Jiang' <dave.jiang@...el.com>,
        "vkoul@...nel.org" <vkoul@...nel.org>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "jing.lin@...el.com" <jing.lin@...el.com>,
        "ashok.raj@...el.com" <ashok.raj@...el.com>,
        "sanjay.k.kumar@...el.com" <sanjay.k.kumar@...el.com>,
        "fenghua.yu@...el.com" <fenghua.yu@...el.com>,
        "kevin.tian@...el.com" <kevin.tian@...el.com>,
        "dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v5 1/5] x86/asm: Carve out a generic movdir64b() helper
 for general usage

From: Dave Jiang
> Sent: 24 September 2020 00:11
>
> The MOVDIR64B instruction can be used by other wrapper instructions. Move
> the asm code to special_insns.h and have iosubmit_cmds512() call the
> asm function.
> 
> Signed-off-by: Dave Jiang <dave.jiang@...el.com>
> Reviewed-by: Tony Luck <tony.luck@...el.com>
> ---
>  arch/x86/include/asm/io.h            |   17 +++--------------
>  arch/x86/include/asm/special_insns.h |   19 +++++++++++++++++++
>  2 files changed, 22 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
> index e1aa17a468a8..d726459d08e5 100644
> --- a/arch/x86/include/asm/io.h
> +++ b/arch/x86/include/asm/io.h
...
> diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
> index 59a3e13204c3..2a5abd27bb86 100644
> --- a/arch/x86/include/asm/special_insns.h
> +++ b/arch/x86/include/asm/special_insns.h
> @@ -234,6 +234,25 @@ static inline void clwb(volatile void *__p)
> 
>  #define nop() asm volatile ("nop")
> 
> +/* The dst parameter must be 64-bytes aligned */
> +static inline void movdir64b(void *dst, const void *src)
> +{
> +	/*
> +	 * Note that this isn't an "on-stack copy", just definition of "dst"
> +	 * as a pointer to 64-bytes of stuff that is going to be overwritten.
> +	 * In the MOVDIR64B case that may be needed as you can use the
> +	 * MOVDIR64B instruction to copy arbitrary memory around. This trick
> +	 * lets the compiler know how much gets clobbered.
> +	 */
> +	volatile struct { char _[64]; } *__dst = dst;
> +
> +	/* MOVDIR64B [rdx], rax */
> +	asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
> +		     :
> +		     : "m" (*(struct { char _[64];} **)src), "a" (__dst)
> +		     : "memory");
> +}
> +
>  #endif /* __KERNEL__ */

You've lost the "d" (src), so the source address never ends up in %rdx.
You don't need the 'memory' clobber either, just:

static inline void movdir64b(void *dst, const void *src)
{
	/*
	 * 64 bytes from dst are marked as modified for completeness.
	 * Since the writes bypass the cache, later reads may return
	 * old data anyway.
	 */
	/* MOVDIR64B [rdx], rax */
	asm volatile (".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
	     : "=m" ((struct { char _[64];} *)dst),
	     : "m" ((struct { char _[64];} *)src), "d" (src), "a" (dst));
}

I've checked that the "m" constraint on src does force (at least one
version of) gcc to actually complete its writes to the supplied buffer
before the asm statement, rather than leaving the data in registers.
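
For illustration only (nothing like this is in the patch, and all of
the names below are made up), the case this matters for is a caller
that builds a 64-byte descriptor on the stack and pushes it out
through an MMIO portal:

#include <linux/compiler.h>	/* __aligned, __iomem, __force */
#include <linux/types.h>	/* u8, u64 */
#include <asm/special_insns.h>	/* movdir64b(), once this patch lands */

/* hypothetical 64-byte work descriptor */
struct demo_desc {
	u64 opcode;
	u64 completion_addr;
	u8  pad[48];
} __aligned(64);

static void demo_submit(void __iomem *portal, u64 opcode, u64 completion)
{
	struct demo_desc desc = {
		.opcode		 = opcode,
		.completion_addr = completion,
	};

	/* portal must be a 64-byte aligned device register window */
	movdir64b((void __force *)portal, &desc);
}

The "m" operand on src is what keeps those two initialising stores
ahead of the asm; without it (or a "memory" clobber) the compiler
would be free to treat desc as dead and drop them.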

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
