Message-ID: <6d335e8701334a15b220b75d49b98d77@AcuMS.aculab.com>
Date: Thu, 22 Feb 2024 22:05:04 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Jason Gunthorpe' <jgg@...dia.com>, Alexander Gordeev
	<agordeev@...ux.ibm.com>, Andrew Morton <akpm@...ux-foundation.org>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>, Borislav Petkov
	<bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, "David S. Miller"
	<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Gerald Schaefer
	<gerald.schaefer@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>, "Heiko
 Carstens" <hca@...ux.ibm.com>, "H. Peter Anvin" <hpa@...or.com>, Justin Stitt
	<justinstitt@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Leon Romanovsky
	<leon@...nel.org>, "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
	"linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
	"llvm@...ts.linux.dev" <llvm@...ts.linux.dev>, Ingo Molnar
	<mingo@...hat.com>, Bill Wendling <morbo@...gle.com>, Nathan Chancellor
	<nathan@...nel.org>, Nick Desaulniers <ndesaulniers@...gle.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>, Paolo Abeni
	<pabeni@...hat.com>, Salil Mehta <salil.mehta@...wei.com>, Jijie Shao
	<shaojijie@...wei.com>, Sven Schnelle <svens@...ux.ibm.com>, Thomas Gleixner
	<tglx@...utronix.de>, "x86@...nel.org" <x86@...nel.org>, Yisen Zhuang
	<yisen.zhuang@...wei.com>
CC: Arnd Bergmann <arnd@...db.de>, Catalin Marinas <catalin.marinas@....com>,
	Leon Romanovsky <leonro@...lanox.com>, "linux-arch@...r.kernel.org"
	<linux-arch@...r.kernel.org>, "linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, Mark Rutland <mark.rutland@....com>,
	Michael Guralnik <michaelgur@...lanox.com>, "patches@...ts.linux.dev"
	<patches@...ts.linux.dev>, Niklas Schnelle <schnelle@...ux.ibm.com>, "Will
 Deacon" <will@...nel.org>
Subject: RE: [PATCH 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()

From: Jason Gunthorpe
> Sent: 21 February 2024 01:17
> 
> The kernel provides driver support for using write combining IO memory
> through the __iowriteXX_copy() API which is commonly used as an optional
> optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> 
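(For context, not part of the patch: the usual call pattern, sketched with
hypothetical names - 'db_wc' being a write-combining ioremap of a
doorbell/WQE region and 'wqe' a 64-byte descriptor in normal memory:

	#include <linux/io.h>

	/* Push one 64-byte WQE; count is in 64-bit quantities, so 8. */
	static void ring_doorbell(u64 __iomem *db_wc, const u64 *wqe)
	{
		__iowrite64_copy(db_wc, wqe, 8);
	}

the point of the series being that those 8 stores can then go out as a
single 64-byte MemWr TLP instead of 8 separate ones.)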
...
> Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> assembly to build consecutive blocks of STR instructions. Provide direct
> support for 64/32/16 large TLP generation in this manner. Optimize for
> common constant lengths so that the compiler can directly inline the store
> blocks.
...
> +/*
> + * This generates a memcpy that works on from/to addresses which are aligned
> + * to 'bits'. Count is the number of 'bits'-sized quantities to copy. It
> + * optimizes to use the STR groupings when possible so that it is WC friendly.
> + */
> +#define memcpy_toio_aligned(to, from, count, bits)                        \
> +	({                                                                \
> +		volatile u##bits __iomem *_to = to;                       \
> +		const u##bits *_from = from;                              \
> +		size_t _count = count;                                    \
> +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> +                                                                          \
> +		for (; _from < _end_from; _from += 8, _to += 8)           \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> +		if ((_count % 8) >= 4) {                                  \

This could just be: if (_count & 4) {

> +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> +			_from += 4;                                       \
> +			_to += 4;                                         \
> +		}                                                         \
> +		if ((_count % 4) >= 2) {                                  \
Ditto: if (_count & 2) {
> +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> +			_from += 2;                                       \
> +			_to += 2;                                         \
> +		}                                                         \
> +		if (_count % 2)                                           \
and again: if (_count & 1)
> +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> +	})
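i.e. the tail spelled out with the bit tests (a sketch against the quoted
macro; for an unsigned _count the two forms are identical):

	if (_count & 4) {	/* same as (_count % 8) >= 4 */           \
		__const_memcpy_toio_aligned##bits(_to, _from, 4);         \
		_from += 4;                                               \
		_to += 4;                                                 \
	}                                                                 \
	if (_count & 2) {	/* same as (_count % 4) >= 2 */           \
		__const_memcpy_toio_aligned##bits(_to, _from, 2);         \
		_from += 2;                                               \
		_to += 2;                                                 \
	}                                                                 \
	if (_count & 1)		/* same as _count % 2 */                  \
		__const_memcpy_toio_aligned##bits(_to, _from, 1);         \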

But that looks a bit large to be inlined.
Except, perhaps, for small constant lengths.
I'd guess that even with write-combining and posted PCIe writes it
doesn't take much before the copy is PCIe limited rather than CPU limited?

Is there a sane way to do the same for reads? They are far worse
than writes.
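(The generic read side the kernel already has is __ioread32_copy() in
lib/iomap_copy.c - just a loop of 32-bit reads, so each word is still a
separate non-posted MMIO read. A sketch of its use, with hypothetical names:

	/* Pull a 64-byte status block; count is in 32-bit quantities. */
	static void read_status(u32 *dst, const void __iomem *src)
	{
		__ioread32_copy(dst, src, 16);
	}

nothing there helps with the read latency, which is the real cost.)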

I solved the problem a few years back on a small PowerPC by using an
on-chip DMA controller that could do PCIe master accesses and spinning
until the transfer completed.
But that sort of DMA controller seems uncommon.
We now initiate most of the transfers from the slave (an FPGA) - after
writing a suitable/sane DMA controller for that end.

	David


