Message-ID: <6e755b2daaf341128cb3b54f36172442@AcuMS.aculab.com>
Date: Wed, 15 May 2019 10:15:07 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Will Deacon' <will.deacon@....com>,
Robin Murphy <robin.murphy@....com>
CC: Zhangshaokun <zhangshaokun@...ilicon.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"huanglingyan (A)" <huanglingyan2@...wei.com>,
"steve.capper@....com" <steve.capper@....com>
Subject: RE: [PATCH] arm64: do_csum: implement accelerated scalar version
...
> > ptr = (u64 *)(buff - offset);
> > shift = offset * 8;
> >
> > /*
> > * Head: zero out any excess leading bytes. Shifting back by the same
> > * amount should be at least as fast as any other way of handling the
> > * odd/even alignment, and means we can ignore it until the very end.
> > */
> > data = *ptr++;
> > #ifdef __LITTLE_ENDIAN
> > data = (data >> shift) << shift;
> > #else
> > data = (data << shift) >> shift;
> > #endif
I suspect that
#ifdef __LITTLE_ENDIAN
data &= ~0ull << shift;
#else
data &= ~0ull >> shift;
#endif
is likely to be better: the mask depends only on 'shift', so it can be
built while 'data' is still in flight from the load, leaving a single
AND on the critical path instead of two dependent shifts.
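FWIW, a quick user-space sketch (mine, not from the patch; gcc/clang's
predefined __BYTE_ORDER__ stands in for the kernel's __LITTLE_ENDIAN,
and the function names are made up) confirms the two forms agree for
every shift the code can actually generate (offset * 8, i.e. 0..56):

#include <stdio.h>
#include <stdint.h>

/* Variant from the patch: shift the excess bytes out, then back in. */
static uint64_t head_shift(uint64_t data, unsigned int shift)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	return (data >> shift) << shift;
#else
	return (data << shift) >> shift;
#endif
}

/* Suggested variant: AND with a mask built from 'shift' alone. */
static uint64_t head_mask(uint64_t data, unsigned int shift)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	return data & (~0ull << shift);
#else
	return data & (~0ull >> shift);
#endif
}

int main(void)
{
	uint64_t data = 0x0123456789abcdefull;
	unsigned int shift;

	/* shift = offset * 8 with offset in 0..7, so at most 56. */
	for (shift = 0; shift <= 56; shift += 8)
		if (head_shift(data, shift) != head_mask(data, shift))
			printf("mismatch at shift %u\n", shift);
	return 0;
}

Building it with -O2 and looking at the object code is an easy way to
compare what the compiler makes of each form.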
David