Message-ID: <3d4fdbb5-7c7f-9331-187e-14c09dd1c18d@arm.com>
Date:   Wed, 15 May 2019 11:57:56 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     David Laight <David.Laight@...LAB.COM>,
        'Will Deacon' <will.deacon@....com>
Cc:     Zhangshaokun <zhangshaokun@...ilicon.com>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
        "huanglingyan (A)" <huanglingyan2@...wei.com>,
        "steve.capper@....com" <steve.capper@....com>
Subject: Re: [PATCH] arm64: do_csum: implement accelerated scalar version

On 15/05/2019 11:15, David Laight wrote:
> ...
>>> 	ptr = (u64 *)(buff - offset);
>>> 	shift = offset * 8;
>>>
>>> 	/*
>>> 	 * Head: zero out any excess leading bytes. Shifting back by the same
>>> 	 * amount should be at least as fast as any other way of handling the
>>> 	 * odd/even alignment, and means we can ignore it until the very end.
>>> 	 */
>>> 	data = *ptr++;
>>> #ifdef __LITTLE_ENDIAN
>>> 	data = (data >> shift) << shift;
>>> #else
>>> 	data = (data << shift) >> shift;
>>> #endif
> 
> I suspect that
> #ifdef __LITTLE_ENDIAN
> 	data &= ~0ull << shift;
> #else
> 	data &= ~0ull >> shift;
> #endif
> is likely to be better.
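
(For reference, and not from the thread or the patch itself: a minimal 
standalone C sketch showing that, on the little-endian path, the two-shift 
form quoted above and the suggested mask form compute the same head value. 
The test value and offset are arbitrary.)

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Example: offset = 3, i.e. three excess leading bytes to zero out. */
		uint64_t data = 0x1122334455667788ULL;
		unsigned int shift = 3 * 8;

		/* Original head handling: shift the excess bytes out and back in. */
		uint64_t via_shifts = (data >> shift) << shift;

		/* Suggested alternative: AND with ~0 shifted into place. */
		uint64_t via_mask = data & (~0ULL << shift);

		assert(via_shifts == via_mask);
		printf("0x%016llx\n", (unsigned long long)via_shifts);
		return 0;
	}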

Out of interest, better in which respects? For the A64 ISA at least, 
that would take 3 instructions plus an additional scratch register, e.g.:

	MOV	x2, #~0
	LSL	x2, x2, x1
	AND	x0, x0, x2

(alternatively "AND x0, x0, x2, LSL x1" to save 4 bytes of code, but that 
will typically take at least as many cycles as just pipelining the two 
'simple' ALU instructions)

Whereas the original is just two shift instructions, in place:

	LSR	x0, x0, x1
	LSL	x0, x0, x1

If the operation were repeated, the constant generation could certainly 
be amortised over multiple subsequent ANDs for a net win, but that isn't 
the case here.
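
(Again purely illustrative and not taken from do_csum: a hypothetical C 
loop in which the mask is generated once and reused across every word, 
which is the amortisation described above; with the two-shift form, both 
shifts would have to be repeated per word.)

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical helper: apply the same mask to every word of a buffer. */
	static uint64_t masked_sum(const uint64_t *buf, size_t n, unsigned int shift)
	{
		uint64_t mask = ~0ULL << shift;	/* constant generated once */
		uint64_t sum = 0;
		size_t i;

		for (i = 0; i < n; i++)
			sum += buf[i] & mask;	/* one AND per word */
		return sum;
	}

	int main(void)
	{
		uint64_t buf[] = { 0x1111111111111111ULL, 0x2222222222222222ULL };

		printf("0x%016llx\n", (unsigned long long)masked_sum(buf, 2, 8));
		return 0;
	}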

Robin.
