Message-ID: <0eb1d644d233432b8e62d87bd470d7d5@AcuMS.aculab.com>
Date: Fri, 30 Aug 2019 09:56:38 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Linus Torvalds' <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>
CC: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Linux List Kernel Mailing" <linux-kernel@...r.kernel.org>,
syzbot <syzbot+8ab2d0f39fb79fe6ca40@...kaller.appspotmail.com>
Subject: RE: [PATCH] /dev/mem: Bail out upon SIGKILL when reading memory.
From: Linus Torvalds
> Sent: 24 August 2019 21:57
> On Sat, Aug 24, 2019 at 1:22 PM Ingo Molnar <mingo@...nel.org> wrote:
> >
> > That makes sense: I measured 17 seconds per 100 MB of data, which is
> > 0.16 usecs per byte. The instruction used by
> > copy_user_enhanced_fast_string() is REP MOVSB - which supposedly goes as
> > high as cacheline size accesses - but perhaps those get broken down for
> > physical memory that has no device claiming it?
>
> All the "rep string" optimizations are _only_ done for regular memory.
More likely, only for pages that use the data cache.
It ought to be possible to map PCIe memory through the data cache.
With care, that would allow longer TLPs (especially read TLPs) for
CPU 'PIO' buffer transfers.
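A toy model of why longer TLPs matter when reads cannot overlap: total time is just (number of reads) x (round-trip latency). The latency figure below is an assumption for illustration only, not a measurement of any particular device.

```python
# Toy model: with no overlapping of read TLPs, transfer time is simply
# (number of reads) * (round-trip latency per read).
# LATENCY_NS is an assumed, illustrative round-trip time.

LATENCY_NS = 160          # assumed round-trip time of one PCIe read, ns
TRANSFER = 4096           # bytes to copy

def copy_time_us(access_bytes):
    """Time to copy TRANSFER bytes using fixed-size, serialized reads."""
    reads = TRANSFER // access_bytes
    return reads * LATENCY_NS / 1000

for width in (1, 8, 64):
    print(f"{width:3d}-byte reads: {copy_time_us(width):8.2f} us")
```

With these assumed numbers, going from byte-wide reads to 64-byte read TLPs cuts the copy time by the same factor of 64, because the per-read latency dominates completely.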
> When it hits any IO accesses, it will do the accesses at the specified
> size (so "movsb" will do it a byte at a time).
>
> 0.16 usec per byte is faster than the traditional ISA 'inb', but not
> by a huge factor.
That speed depends on the target: IIRC our FPGA target takes 128 clocks
at 62.5 MHz to process a PCIe read request.
None of the current Intel x86 CPUs will issue multiple read TLPs from
a single core, so reads never overlap and each one suffers the
full round-trip latency.
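A quick sanity check of the figures quoted in this thread (the only assumption is that "100 MB" means 100 * 2^20 bytes):

```python
# Sanity-check the numbers from the discussion above.
# Assumption: "100 MB" = 100 * 2**20 bytes.

MB = 2 ** 20

# Ingo's measurement: 17 seconds to read 100 MB through /dev/mem.
per_byte_us = 17.0 / (100 * MB) * 1e6
print(f"~{per_byte_us:.3f} us per byte")      # ~0.162 us, the quoted 0.16

# If MOVSB becomes one non-overlapped read TLP per byte, that per-byte
# cost is the full round-trip latency of a single PCIe read.
print(f"~{per_byte_us * 1000:.0f} ns per read TLP")

# The FPGA example: 128 clocks at 62.5 MHz just to service the request.
fpga_us = 128 / 62.5e6 * 1e6
print(f"FPGA service time: {fpga_us:.3f} us")  # 2.048 us
```

So the measured ~160 ns per byte is plausibly one full read round-trip per byte; a slower endpoint like the FPGA example would push that well past 2 us per read.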
David