Message-ID: <4c0c3ee6cfa84d21a807055bc1aa27b8@AcuMS.aculab.com>
Date: Thu, 16 Nov 2023 10:07:35 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Linus Torvalds' <torvalds@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>
CC: David Howells <dhowells@...hat.com>,
kernel test robot <oliver.sang@...el.com>,
"oe-lkp@...ts.linux.dev" <oe-lkp@...ts.linux.dev>,
"lkp@...el.com" <lkp@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
Christian Brauner <christian@...uner.io>,
Matthew Wilcox <willy@...radead.org>,
"ying.huang@...el.com" <ying.huang@...el.com>,
"feng.tang@...el.com" <feng.tang@...el.com>,
"fengwei.yin@...el.com" <fengwei.yin@...el.com>
Subject: RE: [linus:master] [iov_iter] c9eec08bac: vm-scalability.throughput
-16.9% regression
From: Linus Torvalds
> Sent: 15 November 2023 20:07
...
> - our current "memcpy_orig" fallback does unrolled copy loops, and
> the rep_movs_alternative fallback obviously doesn't.
>
> It's not clear that the unrolled copy loops matter for the in-kernel
> kinds of copies, but who knows. The memcpy_orig code is definitely
> trying to be smarter in some other ways too. So the fallback should
> try a *bit* harder than I did, and not just with the whole "don't try
> to handle exceptions" issue I mentioned.
I'm pretty sure the unrolled copy (like the other unrolled loops)
just wastes I-cache and slows things down when the cache is cold.
With out-of-order execution on most x86 CPUs (except the Atoms) you
don't really have to worry about the memory latency.
So arrange for the loop-control instructions to run in parallel with
the memory accesses and you can copy one word every clock.
I never managed a single-clock loop, but you can get a two-clock
loop (with 2 reads and 2 writes in it), so unrolling once is
typically enough.
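Something like the fragment below is what I mean - just a sketch,
not memcpy_orig or anything from the tree; the register choices and
the end-pointer/negative-index framing are mine:

	# Copy %rdx bytes (non-zero, multiple of 16) from %rsi to %rdi.
	# Point both registers at the end of the buffers and negate the
	# count, so the single 'add' is the only loop-carried update and
	# its flags feed the 'jnz'.  The loop control can then run in
	# parallel with the two loads and two stores.
	leaq	(%rsi,%rdx), %rsi
	leaq	(%rdi,%rdx), %rdi
	negq	%rdx
1:	movq	0(%rsi,%rdx), %rax
	movq	8(%rsi,%rdx), %rcx
	movq	%rax, 0(%rdi,%rdx)
	movq	%rcx, 8(%rdi,%rdx)
	addq	$16, %rdx
	jnz	1b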
You can also ignore alignment; the extra cost is minimal (on Intel
CPUs at least). I think it takes an extra u-op when the copy
crosses a cache line boundary.
On Haswell (which is now quite old) both 'rep movsb' and
'rep movsq' copy 16 bytes/clock unless the destination is 32-byte
aligned, in which case they copy 32 bytes/clock.
Source alignment makes no difference, nor does byte alignment.
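(For reference, the two forms being compared are just the string
instructions with their fixed register conventions:

	# dest in %rdi, source in %rsi
	rep movsb	# %rcx = byte count
	rep movsq	# %rcx = count of 8-byte words

all the interesting behaviour is in the microcode, not the code.)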
Another -Os stupidity is 'push $x; pop %reg' to load
a signed byte constant.
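E.g. (the byte counts below come from the encodings; the value is
just an example of mine):

	push	$-5		# 6a fb                  2 bytes
	pop	%rax		# 58                     1 byte, %rax = -5
	# vs
	movq	$-5, %rax	# 48 c7 c0 fb ff ff ff   7 bytes

Three bytes instead of seven, but it goes through the stack so you
pay for a store and a reload.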
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)