Date:   Fri, 19 Nov 2021 22:31:08 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Noah Goldstein' <goldstein.w.n@...il.com>
CC:     "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
        "hpa@...or.com" <hpa@...or.com>,
        "luto@...nel.org" <luto@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v4] arch/x86: Improve 'rep movs{b|q}' usage in
 memmove_64.S

From: Noah Goldstein
> Sent: 17 November 2021 22:45
> 
> On Wed, Nov 17, 2021 at 4:31 PM David Laight <David.Laight@...lab.com> wrote:
> >
> > From: Noah Goldstein
> > > Sent: 17 November 2021 21:03
> > >
> > > Add check for "short distance movsb" for forwards FSRM usage and
> > > entirely remove backwards 'rep movsq'. Both of these usages hit "slow
> > > modes" that are an order of magnitude slower than usual.
> > >
> > > 'rep movsb' has some noticeable VERY slow modes that the current
> > > implementation is either 1) not checking for or 2) intentionally
> > > using.
> >
> > How does this relate to the decision that glibc made a few years
> > ago to use backwards 'rep movs' for non-overlapping copies?
> 
> GLIBC doesn't use backwards `rep movs`.  Since the regions are
> non-overlapping it just uses forward copy. Backwards `rep movs` is
> from setting the direction flag (`std`) and is a very slow byte
> copy. For overlapping regions where backwards copy is necessary GLIBC
> uses 4x vec copy loop.
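
For reference, the policy described above, sketched very roughly in C.
The function name and the plain byte loops are illustrative only -
glibc really uses a 4x vector loop for the backwards case:

	#include <stddef.h>

	void *memmove_sketch(void *dst, const void *src, size_t n)
	{
		unsigned char *d = dst;
		const unsigned char *s = src;
		size_t i;

		if (d <= s || d >= s + n) {
			/* dst below src, or no overlap: a forward copy is
			 * safe - this is where forward 'rep movsb' (FSRM)
			 * can be used. */
			for (i = 0; i < n; i++)
				d[i] = s[i];
		} else {
			/* dst overlaps the tail of src: must copy backwards.
			 * Setting DF for a backwards 'rep movs' would be a
			 * very slow byte copy, hence an explicit loop. */
			for (i = n; i-- > 0; )
				d[i] = s[i];
		}
		return dst;
	}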

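Likewise, the "short distance movsb" check the patch adds amounts to
something like the test below - purely a conceptual sketch; the name
and the 64-byte cut-off are illustrative, not the patch's actual values:

	#include <stdbool.h>
	#include <stddef.h>

	/*
	 * When src and dst are very close together, forward 'rep movsb'
	 * on FSRM parts drops into a slow mode, so an ordinary copy loop
	 * is preferred.  The threshold here is only illustrative.
	 */
	static bool rep_movsb_ok(const void *dst, const void *src)
	{
		ptrdiff_t dist = (const char *)dst - (const char *)src;

		if (dist < 0)
			dist = -dist;
		return dist >= 64;	/* hypothetical cut-off */
	}
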
Try to find this commit: 6fb8cbcb58a29fff73eb2101b34caa19a7f88eba

Or follow the links from https://www.win.tue.nl/~aeb/linux/misc/gcc-semibug.html,
but I can't find the actual patch.

The claimed benefit was a massive performance increase for the reverse copy.

The PDF from www.agner.org/optimize may well explain why some
copies are unexpectedly slow due to cache access aliasing.

I'm pretty sure that Intel cpus (possibly from Ivy Bridge onwards)
can be persuaded to copy 8 bytes/clock for in-cache data with
a fairly simple loop that contains two reads (possibly misaligned)
and two writes (so 16 bytes per iteration).
Extra unrolling just adds extra code at the top and bottom.

You might want a loop like the one below, with %rsi and %rdi pointing
just past the end of the buffers and %rcx holding minus the byte count
(a multiple of 16):
	1:	mov	0(%rsi, %rcx),%rax
		mov	8(%rsi, %rcx),%rdx
		mov	%rax, 0(%rdi, %rcx)
		mov	%rdx, 8(%rdi, %rcx)
		add	$16, %rcx
		jnz	1b
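
The same idea in C, as a rough sketch (the name copy_fwd16 is mine; it
assumes the length is a non-zero multiple of 16 and the buffers don't
overlap):

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	/*
	 * Two (possibly misaligned) 8-byte reads and two writes per
	 * iteration, indexed by a negative count that runs up to zero,
	 * mirroring the asm loop above.
	 */
	static void copy_fwd16(void *dst, const void *src, size_t len)
	{
		unsigned char *d = (unsigned char *)dst + len;
		const unsigned char *s = (const unsigned char *)src + len;
		intptr_t i = -(intptr_t)len;

		do {
			uint64_t a, b;

			memcpy(&a, s + i, 8);
			memcpy(&b, s + i + 8, 8);
			memcpy(d + i, &a, 8);
			memcpy(d + i + 8, &b, 8);
			i += 16;
		} while (i != 0);
	}

The 8-byte memcpy() calls are just a way of writing possibly misaligned
64-bit loads and stores; compilers typically turn them into plain mov
instructions.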

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
