Date:   Thu, 29 Jun 2023 14:04:30 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Borislav Petkov' <bp@...en8.de>,
        Noah Goldstein <goldstein.w.n@...il.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>
CC:     "x86@...nel.org" <x86@...nel.org>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "hpa@...or.com" <hpa@...or.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: RE: x86/csum: Remove unnecessary odd handling

From: Borislav Petkov
> Sent: 28 June 2023 10:13
> 
> + Linus who's been poking at this yesterday.
> 
> + lkml. Please always CC lkml when sending patches.
> 
> On Tue, Jun 27, 2023 at 09:06:57PM -0500, Noah Goldstein wrote:
> > The special case for odd-aligned buffers is unnecessary and mostly
> > just adds overhead. Aligned buffers are the expectation, and even
> > for unaligned buffers, the only case that is helped is a buffer
> > 1 byte off from word alignment, which is ~1/7 of the cases. Overall
> > it seems highly unlikely to be worth the extra branch.
> >
> > It was left in the previous perf improvement patch because I was
> > erroneously comparing the exact output of `csum_partial(...)`, but
> > really we only need `csum_fold(csum_partial(...))` to match, so
> > it's safe to remove.

I'm sure I've suggested this before.
The 'odd' check was needed by an earlier implementation.

Misaligned buffers are (just about) measurably slower.
But the difference is pretty much noise, and the extra code
costs more in the aligned case.

It is pretty much impossible to find out what the cpu is doing,
but if you do misaligned accesses to a PCIe target you can
(with suitable hardware) look at the generated TLPs.

What that shows is misaligned transfers being done in 8-byte
chunks and being split into two TLPs if they cross a 64-byte
(probably cache line) boundary.

It is likely that the same happens for cached accesses.

Given that the cpu can do two memory reads per clock,
it isn't surprising that the checksum loop (which doesn't
even manage one read per clock) is slower by less than
one clock per cache line.

Someone might also want to use the 'arc' C version of csum_fold()
on pretty much every architecture [1].
It is:
	return (~sum - ror32(sum, 16)) >> 16;
which is significantly better than the x86 asm (even on more recent
cpus that don't take 2 clocks for an 'adc').

[1] arm can do a bit better because of the barrel shifter.
    sparc is slower because it has a carry flag but no rotate.
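
(A quick userspace check, with ad-hoc names, that the one-liner
matches the classic two-step fold for every 32-bit input:)

	#include <stdio.h>
	#include <stdint.h>

	static uint32_t ror32(uint32_t x, unsigned int n)
	{
		return (x >> n) | (x << (32 - n));
	}

	/* Classic fold: add the halves, fold the carry, complement. */
	static uint16_t fold_classic(uint32_t sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);
		sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)~sum;
	}

	/* The one-liner quoted above. */
	static uint16_t fold_ror(uint32_t sum)
	{
		return (uint16_t)((~sum - ror32(sum, 16)) >> 16);
	}

	int main(void)
	{
		uint64_t s;

		for (s = 0; s <= 0xffffffff; s++)
			if (fold_classic((uint32_t)s) != fold_ror((uint32_t)s)) {
				printf("mismatch at %08llx\n",
				       (unsigned long long)s);
				return 1;
			}
		printf("all 2^32 inputs agree\n");
		return 0;
	}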

	David

