Message-ID: <20240228230616.GS13330@nvidia.com>
Date: Wed, 28 Feb 2024 19:06:16 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Alexander Gordeev <agordeev@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
"H. Peter Anvin" <hpa@...or.com>,
Justin Stitt <justinstitt@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Leon Romanovsky <leon@...nel.org>,
linux-rdma@...r.kernel.org, linux-s390@...r.kernel.org,
llvm@...ts.linux.dev, Ingo Molnar <mingo@...hat.com>,
Bill Wendling <morbo@...gle.com>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>, netdev@...r.kernel.org,
Paolo Abeni <pabeni@...hat.com>,
Salil Mehta <salil.mehta@...wei.com>,
Jijie Shao <shaojijie@...wei.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org,
Yisen Zhuang <yisen.zhuang@...wei.com>,
Arnd Bergmann <arnd@...db.de>,
Leon Romanovsky <leonro@...lanox.com>, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Mark Rutland <mark.rutland@....com>,
Michael Guralnik <michaelgur@...lanox.com>, patches@...ts.linux.dev,
Niklas Schnelle <schnelle@...ux.ibm.com>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH 4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()
On Tue, Feb 27, 2024 at 10:37:18AM +0000, Catalin Marinas wrote:
> On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > +/*
> > + * This generates a memcpy that works on from/to addresses which are
> > + * aligned to "bits" bits. Count is the number of "bits"-sized quantities
> > + * to copy. It optimizes to use the STR groupings when possible so that
> > + * it is WC friendly.
> > + */
> > +#define memcpy_toio_aligned(to, from, count, bits) \
> > + ({ \
> > + volatile u##bits __iomem *_to = to; \
> > + const u##bits *_from = from; \
> > + size_t _count = count; \
> > + const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > + \
> > + for (; _from < _end_from; _from += 8, _to += 8) \
> > + __const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > + if ((_count % 8) >= 4) { \
> > + __const_memcpy_toio_aligned##bits(_to, _from, 4); \
> > + _from += 4; \
> > + _to += 4; \
> > + } \
> > + if ((_count % 4) >= 2) { \
> > + __const_memcpy_toio_aligned##bits(_to, _from, 2); \
> > + _from += 2; \
> > + _to += 2; \
> > + } \
> > + if (_count % 2) \
> > + __const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > + })
>
> Do we actually need all this if count is not constant? If it's not
> performance critical anywhere, I'd rather copy the generic
> implementation, it's easier to read.
Which generic version?
The point is to maximize WC effects with non-constant values too, so I
think we do need something like this; i.e., we can't just fall back to
looping over 64-bit stores one at a time.
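To make that concrete, here is a stand-alone illustration (not kernel
code, just the chunking arithmetic) of how the macro above decomposes a
non-constant count into constant-sized blocks. For count = 13 it prints
one 8-block, one 4-block, and one 1-block:

  #include <stdio.h>

  /* Illustration of the chunking in memcpy_toio_aligned(): a
   * non-constant count is split into constant 8/4/2/1 blocks so each
   * block can go through the constant-count (STR-grouping) helper. */
  int main(void)
  {
          size_t count = 13;      /* example: 13 quantities to copy */
          size_t done = 0;

          while (done + 8 <= count) {     /* full 8-quantity blocks */
                  printf("block of 8 at offset %zu\n", done);
                  done += 8;
          }
          if ((count % 8) >= 4) {         /* one 4-quantity block */
                  printf("block of 4 at offset %zu\n", done);
                  done += 4;
          }
          if ((count % 4) >= 2) {         /* one 2-quantity block */
                  printf("block of 2 at offset %zu\n", done);
                  done += 2;
          }
          if (count % 2)                  /* trailing single quantity */
                  printf("block of 1 at offset %zu\n", done);
          return 0;
  }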
If we don't use the large block stores, we know we get very poor WC
behavior, so at least the 8 and 4 constant-count sections are needed.
At that point you may as well just do 4 and 2 directly as well instead
of another loop.
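For reference, here is a minimal sketch of what the constant-count path
is after (my illustration, not the patch itself; the real helpers use
inline asm, and wc_store8_64() is a made-up name):

  /* Eight back-to-back 64-bit MMIO stores: with the count known at
   * compile time these can be emitted as a dense STR/STP group that
   * the write-combining buffer can merge into one 64-byte burst. */
  static inline void wc_store8_64(volatile u64 __iomem *to, const u64 *from)
  {
          __raw_writeq(from[0], to + 0);
          __raw_writeq(from[1], to + 1);
          __raw_writeq(from[2], to + 2);
          __raw_writeq(from[3], to + 3);
          __raw_writeq(from[4], to + 4);
          __raw_writeq(from[5], to + 5);
          __raw_writeq(from[6], to + 6);
          __raw_writeq(from[7], to + 7);
  }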
Most places I know of that use this are performance paths; the entire
__iowriteXX_copy() infrastructure was introduced as an x86 performance
optimization.
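As a (hypothetical) example of such a path, a driver pushing a 64-byte
work queue descriptor through a write-combining doorbell page would do
something like:

  /* priv->db_wc_page and wqe are made-up names. __iowrite64_copy()
   * takes the count in 64-bit quantities, so 64 bytes is a count of 8. */
  __iowrite64_copy(priv->db_wc_page, wqe, 64 / sizeof(u64));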
Jason