Message-ID: <CAHmME9pm4DHuBsE+hoFxnm1B5OWAZ+OyKXzeKDxHtisZpw4ebg@mail.gmail.com>
Date:   Thu, 3 Nov 2016 23:20:08 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     David Miller <davem@...emloft.net>
Cc:     Herbert Xu <herbert@...dor.apana.org.au>,
        linux-crypto@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        Martin Willi <martin@...ongswan.org>,
        WireGuard mailing list <wireguard@...ts.zx2c4.com>,
        René van Dorst <opensource@...rst.com>
Subject: Re: [PATCH] poly1305: generic C can be faster on chips with slow unaligned access

Hi David,

On Thu, Nov 3, 2016 at 6:08 PM, David Miller <davem@...emloft.net> wrote:
> In any event no piece of code should be doing 32-bit word reads from
> addresses like "x + 3" without, at a very minimum, going through the
> kernel unaligned access handlers.

Excellent point. In other words,

    ctx->r[0] = (le32_to_cpuvp(key +  0) >> 0) & 0x3ffffff;
    ctx->r[1] = (le32_to_cpuvp(key +  3) >> 2) & 0x3ffff03;
    ctx->r[2] = (le32_to_cpuvp(key +  6) >> 4) & 0x3ffc0ff;
    ctx->r[3] = (le32_to_cpuvp(key +  9) >> 6) & 0x3f03fff;
    ctx->r[4] = (le32_to_cpuvp(key + 12) >> 8) & 0x00fffff;

should change to the following, where the loads at key + 0 and key + 12
can stay as-is, since those offsets remain word-aligned whenever the
key buffer itself is:

    ctx->r[0] = (le32_to_cpuvp(key +  0) >> 0) & 0x3ffffff;
    ctx->r[1] = (get_unaligned_le32(key +  3) >> 2) & 0x3ffff03;
    ctx->r[2] = (get_unaligned_le32(key +  6) >> 4) & 0x3ffc0ff;
    ctx->r[3] = (get_unaligned_le32(key +  9) >> 6) & 0x3f03fff;
    ctx->r[4] = (le32_to_cpuvp(key + 12) >> 8) & 0x00fffff;

> We know explicitly that these offsets will not be 32-bit aligned, so
> it is required that we use the helpers, or alternatively do things to
> avoid these unaligned accesses such as using temporary storage when
> the HAVE_EFFICIENT_UNALIGNED_ACCESS kconfig value is not set.
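
To make that concrete, something like the below is what I have in
mind. Just a sketch with an illustrative helper name, untested; and
note that get_unaligned_le32() already does something very similar
internally on architectures without cheap unaligned loads:

    #include <linux/types.h>     /* u8, u32 */
    #include <linux/kconfig.h>   /* IS_ENABLED() */
    #include <linux/string.h>    /* memcpy() */
    #include <asm/byteorder.h>   /* le32_to_cpu() */
    #include <asm/unaligned.h>   /* get_unaligned_le32() */

    /* Load a little-endian u32 from a possibly unaligned pointer.
     * Where the architecture can't do cheap unaligned loads, go
     * through aligned temporary storage instead; the compiler lowers
     * the memcpy() to whatever aligned accesses are safe. */
    static inline u32 load_le32(const u8 *p)
    {
            u32 tmp;

            if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
                    return get_unaligned_le32(p);

            memcpy(&tmp, p, sizeof(tmp));
            return le32_to_cpu(tmp);
    }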

So the question is: is the original patch's clever avoidance of
unaligned accesses faster or slower than simply converting those
accesses to use the helper function?
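
For anyone reading along without the original patch handy, its
avoidance trick does four naturally aligned loads (assuming the key
buffer itself is 4-byte aligned) and then shifts the five 26-bit limbs
out of those words. Reconstructed from memory, so a sketch rather than
the verbatim patch text, with a made-up function name:

    /* Derive the five clamped 26-bit limbs of r from four aligned
     * little-endian loads; same masks as in the snippets above. */
    static void poly1305_setup_r_aligned(u32 r[5], const u8 key[16])
    {
            u32 t0 = le32_to_cpuvp(key +  0);
            u32 t1 = le32_to_cpuvp(key +  4);
            u32 t2 = le32_to_cpuvp(key +  8);
            u32 t3 = le32_to_cpuvp(key + 12);

            r[0] =   t0                      & 0x3ffffff;
            r[1] = ((t0 >> 26) | (t1 <<  6)) & 0x3ffff03;
            r[2] = ((t1 >> 20) | (t2 << 12)) & 0x3ffc0ff;
            r[3] = ((t2 >> 14) | (t3 << 18)) & 0x3f03fff;
            r[4] =  (t3 >>  8)               & 0x00fffff;
    }

That's the strategy the harness below pits against the helper-based
version.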

I've put a little test harness together for playing with this:

    $ git clone git://git.zx2c4.com/polybench
    $ cd polybench
    $ make run

To test the first method, build and run as above. To test the second,
remove "#define USE_FIRST_METHOD" from the source code and run again.

@René: do you think you could retest on your MIPS32r2 hardware and
report back which is faster?

And if anybody else has other hardware and would like to give this a
try, the extra data points would be welcome.

Regards,
Jason
