Message-ID: <CAFUsyfLUQLj5py1AQ+4NptM6htWxV5i0qxkeXDUdFPfAnqRY2w@mail.gmail.com>
Date: Fri, 19 Nov 2021 18:05:56 -0600
From: Noah Goldstein <goldstein.w.n@...il.com>
To: David Laight <David.Laight@...lab.com>
Cc: "tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"luto@...nel.org" <luto@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4] arch/x86: Improve 'rep movs{b|q}' usage in memmove_64.S
On Fri, Nov 19, 2021 at 4:31 PM David Laight <David.Laight@...lab.com> wrote:
>
> From: Noah Goldstein
> > Sent: 17 November 2021 22:45
> >
> > On Wed, Nov 17, 2021 at 4:31 PM David Laight <David.Laight@...lab.com> wrote:
> > >
> > > From: Noah Goldstein
> > > > Sent: 17 November 2021 21:03
> > > >
> > > > Add check for "short distance movsb" for forwards FSRM usage and
> > > > entirely remove backwards 'rep movsq'. Both of these usages hit "slow
> > > > modes" that are an order of magnitude slower than usual.
> > > >
> > > > 'rep movsb' has some noticeable VERY slow modes that the current
> > > > implementation is either 1) not checking for or 2) intentionally
> > > > using.
> > >
> > > How does this relate to the decision that glibc made a few years
> > > ago to use backwards 'rep movs' for non-overlapping copies?
> >
> > GLIBC doesn't use backwards `rep movs`. Since the regions are
> > non-overlapping it just uses a forward copy. Backwards `rep movs`
> > comes from setting the direction flag (`std`) and is a very slow
> > byte copy. For overlapping regions where a backwards copy is
> > necessary, GLIBC uses a 4x vec copy loop.
>
> Try to find this commit 6fb8cbcb58a29fff73eb2101b34caa19a7f88eba
>
> Or follow links from https://www.win.tue.nl/~aeb/linux/misc/gcc-semibug.html
> But I can't find the actual patch.
>
> The claim was a massive performance increase for the reverse copy.
>
I don't think that's referring to optimizations around `rep movs`. It
appears to be referring to fallout from this patch:
https://sourceware.org/git/?p=glibc.git;a=commit;h=6fb8cbcb58a29fff73eb2101b34caa19a7f88eba
which broke programs that misused `memcpy` with overlapping regions,
resulting in this fix:
https://sourceware.org/git/?p=glibc.git;a=commit;h=0354e355014b7bfda32622e0255399d859862fcd
AFAICT support for ERMS was only added around:
https://sourceware.org/git/?p=glibc.git;a=commit;h=13efa86ece61bf84daca50cab30db1b0902fe2db
Either way, GLIBC's memcpy/memmove at the moment most certainly does
not use backwards `rep movs`:
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S;hb=HEAD#l655
as it is very slow.
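
For reference, the slow backwards mode being discussed is DF=1 `rep
movsb`. A minimal sketch of what that looks like (illustration only,
not glibc or kernel code; rdi = dst, rsi = src, rdx = len):

	/* Backwards byte copy via the direction flag.  With DF=1,
	 * movsb decrements rsi/rdi after each byte, so this copies
	 * from the last byte down to the first. */
backwards_rep_movsb:
	lea	-1(%rsi, %rdx), %rsi	/* last source byte */
	lea	-1(%rdi, %rdx), %rdi	/* last dest byte */
	mov	%rdx, %rcx		/* byte count */
	std				/* set DF: copy backwards */
	rep movsb
	cld				/* ABI: DF must be clear on return */
	ret

It is a byte-at-a-time copy in that mode, which is why it loses so
badly.
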
> The pdf from www.agner.org/optimize may well indicate why some
> copies are unexpectedly slow due to cache access aliasing.
Even in the `4k` aliasing case, `rep movsb` seems to stay within a
factor of 2 of optimal, whereas the `std` backwards `rep movs` loses
by a factor of 10.
Either way, `4k` aliasing detection is mostly a concern for `memcpy`;
for `memmove` the direction of copy is a correctness question, not
an optimization.
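
As a sketch of that correctness rule (forward_copy and backward_copy
are hypothetical helper labels, not the kernel's; rdi = dst, rsi =
src, rdx = len):

	/* Forward is safe when dst <= src or when the regions are
	 * disjoint; only src < dst < src + len forces a backwards copy. */
memmove_direction:
	cmp	%rsi, %rdi
	jbe	forward_copy		/* dst <= src: forward always safe */
	lea	(%rsi, %rdx), %rax	/* rax = src + len */
	cmp	%rax, %rdi
	jae	forward_copy		/* dst >= src + len: no overlap */
	jmp	backward_copy		/* dst inside src region: backwards */
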
>
> I'm pretty sure that Intel CPUs (possibly from Ivy Bridge onwards)
> can be persuaded to copy 8 bytes/clock for in-cache data with
> a fairly simple loop that contains 2 reads (maybe misaligned)
> and two writes (so 16 bytes per iteration).
> Extra unrolling just adds extra code top and bottom.
>
> You might want a loop like:
> 	# assumes %rsi = src + len, %rdi = dst + len, %rcx = -len
> 	# (len a non-zero multiple of 16), so %rcx counts up to 0
> 1: mov 0(%rsi, %rcx),%rax
> 	mov 8(%rsi, %rcx),%rdx
> 	mov %rax, 0(%rdi, %rcx)
> 	mov %rdx, 8(%rdi, %rcx)
> 	add $16, %rcx
> 	jnz 1b
>
> David
The backwards path already uses a 4x unrolled `movq` loop.
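
Roughly along these lines (simplified sketch of such a loop, not the
actual memmove_64.S code; assumes rdx = len is a non-zero multiple of
32, rdi = dst, rsi = src):

	/* 4x unrolled backwards qword copy: walk rdx down from len
	 * to 0 in 32-byte steps, copying the highest qwords first. */
backward_copy:
1:	sub	$32, %rdx
	mov	24(%rsi, %rdx), %rax
	mov	16(%rsi, %rdx), %r8
	mov	8(%rsi, %rdx), %r9
	mov	0(%rsi, %rdx), %r10
	mov	%rax, 24(%rdi, %rdx)
	mov	%r8, 16(%rdi, %rdx)
	mov	%r9, 8(%rdi, %rdx)
	mov	%r10, 0(%rdi, %rdx)
	jnz	1b			/* ZF from the sub: stop at rdx == 0 */
	ret
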
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)