Message-ID: <20200107184003.GK29542@zn.tnic>
Date:   Tue, 7 Jan 2020 19:40:03 +0100
From:   Borislav Petkov <bp@...en8.de>
To:     Tony Luck <tony.luck@...el.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/cpufeatures: Add support for fast short rep mov

On Mon, Dec 16, 2019 at 01:42:54PM -0800, Tony Luck wrote:
> From the Intel Optimization Reference Manual:
> 
> 3.7.6.1 Fast Short REP MOVSB
> Beginning with processors based on Ice Lake Client microarchitecture,
> REP MOVSB performance of short operations is enhanced. The enhancement
> applies to string lengths between 1 and 128 bytes long.  Support for
> fast-short REP MOVSB is enumerated by the CPUID feature flag: CPUID
> (EAX=7H, ECX=0H).EDX.FAST_SHORT_REP_MOVSB[bit 4] = 1. There is no change
> in the REP STOS performance.
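
As an aside, that enumeration is easy to double-check from userspace;
a minimal sketch, assuming GCC/clang's <cpuid.h> (not part of the
patch):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 7, subleaf 0: FSRM is enumerated in EDX bit 4 */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("FSRM: %s\n", (edx & (1u << 4)) ? "yes" : "no");
	return 0;
}
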
> 
> Add an X86_FEATURE_FSRM flag for this.
> 
> memmove() avoids REP MOVSB for short (< 32 byte) copies. Fix it
> to check FSRM and use REP MOVSB for short copies on systems that
> support it.
> 
> Signed-off-by: Tony Luck <tony.luck@...el.com>
> 
> ---
> 
> Time (cycles) for memmove() sizes 1..31 with neither source nor
> destination in cache.
> 
>   1800 +-+-------+--------+---------+---------+---------+--------+-------+-+
>        +         +        +         +         +         +        +         +
>   1600 +-+                                          'memmove-fsrm' *******-+
>        |   ######                                   'memmove-orig' ####### |
>   1400 +-+ #     #####################                                   +-+
>        |   #                          ############                         |
>   1200 +-+#                                       ##################     +-+
>        |  #                                                                |
>   1000 +-+#                                                              +-+
>        |  #                                                                |
>        | #                                                                 |
>    800 +-#                                                               +-+
>        | #                                                                 |
>    600 +-***********************                                         +-+
>        |                        *****************************              |
>    400 +-+                                                   *******     +-+
>        |                                                                   |
>    200 +-+                                                               +-+
>        +         +        +         +         +         +        +         +
>      0 +-+-------+--------+---------+---------+---------+--------+-------+-+
>        0         5        10        15        20        25       30        35

I don't mind this graph being part of the commit message - it shows the
speedup nicely, even if it is only a microbenchmark. Or are you not
adding it just because it is a microbenchmark and not something more
representative?
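
FWIW, below is roughly the kind of cold-cache measurement I'd expect
behind such a graph - my own sketch, not Tony's harness, and a
userspace one, so it times the libc memmove() rather than this kernel
routine:

#include <x86intrin.h>	/* __rdtsc(), _mm_clflush(), _mm_mfence() */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static unsigned char src[4096], dst[4096];

int main(void)
{
	for (size_t len = 1; len < 32; len++) {
		/* evict both buffers so neither side is cached */
		for (size_t i = 0; i < sizeof(src); i += 64) {
			_mm_clflush(src + i);
			_mm_clflush(dst + i);
		}
		_mm_mfence();

		uint64_t t0 = __rdtsc();
		memmove(dst, src, len);
		uint64_t t1 = __rdtsc();

		printf("%zu %llu\n", len, (unsigned long long)(t1 - t0));
	}
	return 0;
}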

>  arch/x86/include/asm/cpufeatures.h | 1 +
>  arch/x86/lib/memmove_64.S          | 6 +++---
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index e9b62498fe75..98c60fa31ced 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -357,6 +357,7 @@
>  /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
>  #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
>  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
> +#define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
>  #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
>  #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
>  #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
> diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
> index 337830d7a59c..4a23086806e6 100644
> --- a/arch/x86/lib/memmove_64.S
> +++ b/arch/x86/lib/memmove_64.S
> @@ -29,10 +29,7 @@
>  SYM_FUNC_START_ALIAS(memmove)
>  SYM_FUNC_START(__memmove)
>  
> -	/* Handle more 32 bytes in loop */
>  	mov %rdi, %rax
> -	cmp $0x20, %rdx
> -	jb	1f
>  
>  	/* Decide forward/backward copy mode */
>  	cmp %rdi, %rsi
> @@ -43,6 +40,7 @@ SYM_FUNC_START(__memmove)
>  	jg 2f
>  
>  .Lmemmove_begin_forward:
> +	ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
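
If I read this right: on non-FSRM parts the old "len < 32, take the
short-copy tail" check stays, and on FSRM parts it gets patched out so
that short copies fall through to the REP MOVSB alternative below. In
C terms, roughly (illustrative only, "short_copy" standing in for
label 1):

	if (!static_cpu_has(X86_FEATURE_FSRM) && len < 0x20)
		goto short_copy;
	/* otherwise fall through to the forward-copy/REP MOVSB path */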

So the enhancement is for string lengths up to two cachelines. Why
are you limiting this to 32 bytes?

I know the function handles 32 bytes at a time, but what I'd imagine
here is having the fastest variant upfront, one which does REP; MOVSB
for all lengths: FSRM means fast short strings, and ERMS - and I'm
strongly assuming here that FSRM *implies* ERMS - means fast "longer"
strings, so to speak. So FSRM would mean fast strings of *all* lengths
in the end, no?
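
If that implication matters for the final layout, it could even be
sanity-checked at boot - a minimal sketch, my illustration and not part
of the patch:

	if (boot_cpu_has(X86_FEATURE_FSRM) && !boot_cpu_has(X86_FEATURE_ERMS))
		pr_warn_once("FSRM enumerated without ERMS?\n");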

Also, does the copy direction influence the performance of the FSRM
REP; MOVSB variant? If not, you can do something like this:

 SYM_FUNC_START_ALIAS(memmove)
 SYM_FUNC_START(__memmove)

	mov %rdi, %rax

	/* FSRM handles all possible string lengths and directions optimally. */
	ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_FSRM

	cmp $0x20, %rdx
	jb 1f
	...

Or?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
