Message-ID: <CY4PR0401MB36524B348358B23A8DFB741AC3420@CY4PR0401MB3652.namprd04.prod.outlook.com>
Date:   Wed, 12 Aug 2020 10:45:00 +0000
From:   "Van Leeuwen, Pascal" <pvanleeuwen@...bus.com>
To:     Sabrina Dubroca <sd@...asysnail.net>,
        Scott Dial <scott@...ttdial.com>
CC:     "linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
        Ryan Cox <ryan_cox@....edu>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        "ebiggers@...gle.com" <ebiggers@...gle.com>
Subject: RE: Severe performance regression in "net: macsec: preserve ingress
 frame ordering"

> -----Original Message-----
> From: linux-crypto-owner@...r.kernel.org <linux-crypto-owner@...r.kernel.org> On Behalf Of Sabrina Dubroca
> Sent: Wednesday, August 12, 2020 12:05 PM
> To: Scott Dial <scott@...ttdial.com>
> Cc: linux-crypto@...r.kernel.org; Ryan Cox <ryan_cox@....edu>; netdev@...r.kernel.org; davem@...emloft.net; Antoine Tenart
> <antoine.tenart@...tlin.com>; ebiggers@...gle.com
> Subject: Re: Severe performance regression in "net: macsec: preserve ingress frame ordering"
>
> 2020-08-10, 12:09:40 -0400, Scott Dial wrote:
> > On 8/10/2020 9:34 AM, Sabrina Dubroca wrote:
> > > [adding the linux-crypto list]
> > >
> > > 2020-08-06, 23:48:16 -0400, Scott Dial wrote:
> > >> On 8/6/2020 5:11 PM, Ryan Cox wrote:
> > >>> With 5.7 I get:
> > >>> * 9.90 Gb/s with no macsec at all
> > >>> * 1.80 Gb/s with macsec WITHOUT encryption
> > >>> * 1.00 Gb/s (sometimes, but often less) with macsec WITH encryption
> > >>>
> > >>> With 5.7 but with ab046a5d4be4c90a3952a0eae75617b49c0cb01b reverted, I get:
> > >>> * 9.90 Gb/s with no macsec at all
> > >>> * 7.33 Gb/s with macsec WITHOUT encryption
> > >>> * 9.83 Gb/s with macsec WITH encryption
> > >>>
> > >>> On tests where performance is bad (including macsec without encryption),
> > >>> iperf3 is at 100% CPU usage.  I was able to run iperf3 under `perf record`
> > >>> in a number of the tests, but, unfortunately, I have had trouble
> > >>> compiling perf for my own 5.7 builds (definitely PEBKAC).  If it
> > >>> would be useful, I can work on fixing the perf compilation issues.
> > >>
> > >> For certain, you are measuring the difference between AES-NI doing
> > >> gcm(aes) and gcm_base(ctr(aes-aesni),ghash-generic). Specifically, the
> > >> hotspot is ghash-generic's implementation of the ghash_update() function.
> > >> I appreciate your testing, because I was limited in my ability to test
> > >> beyond 1 Gb/s.
> > >>
> > >> The aes-aesni driver is smart enough to use the FPU if it's not busy and
> > >> fall back to the CPU otherwise. Unfortunately, the ghash-clmulni driver
> > >> does not have that kind of logic in it and only provides an async version,
> > >> so we are forced to use the ghash-generic implementation, which is a pure
> > >> CPU implementation. The ideal would be for aesni_intel to provide a
> > >> synchronous version of gcm(aes) that falls back to the CPU if the FPU is
> > >> busy.
> > >> I don't know if the crypto maintainers would be open to such a change, but
> > >> if the choice were between reverting and patching the crypto code, then I
> > >> would work on patching the crypto code.
> > >
> > > To the crypto folks, a bit of context: Scott wrote commit ab046a5d4be4
> > > ("net: macsec: preserve ingress frame ordering"), which made MACsec
> > > use gcm(aes) with CRYPTO_ALG_ASYNC. This prevents out-of-order
> > > decryption, but reduces performance. We'd like to restore performance
> > > on systems where the FPU is available without breaking MACsec for
> > > systems where the FPU is often busy.
> > >
> > > A quick and dirty alternative might be to let the administrator decide
> > > if they're OK with some out-of-order decryption. Maybe they know that
> > > their FPU will be mostly idle, so it won't even be an issue (or maybe
> > > the opposite, i.e. keep the fast default and let admins fix their
> > > setups with an extra flag).
> >
> > I can appreciate favoring performance over correctness as a practical
> > concern, but I'd suggest that the out-of-order decryption *is* a
> > performance concern as well. We can debate the realness of my workload,
> > but even in Ryan's tests on an otherwise idle server, he showed that
> > 0.07% of the frames needed to be dispatched to cryptd, and that for
> > whatever reason this happens more often with encryption disabled, which
> > correlates with his decrease in throughput (9.83 Gb/s to 7.33 Gb/s, and
> > 9.19 Gb/s to 6.00 Gb/s), perhaps by causing exponential backoff from TCP
> > retries. I can resurrect my test setup, but my numbers were worse than
> > Ryan's.
> >
> > In any case, I counted 18 implementations of HW-accelerated gcm(aes) in
> > the kernel; 3 of those implementations are in arch (x86, arm64, and
> > s390) and the rest are crypto device drivers. Of all those
> > implementations, the AES-NI implementation is the only one that
> > dispatches to cryptd (via code in crypto/simd.c). AFAICT, every other
> > implementation of gcm(aes) is synchronous, but each would require closer
> > inspection to be certain.
>
> I randomly picked 2 of them (chcr and inside-secure), and they both
> set CRYPTO_ALG_ASYNC, so I guess not.
>
You can expect most, if not all, HW-accelerated crypto to be ASYNC. This is
important for achieving decent performance, as going through an accelerator
external to the CPU incurs significant latency. (Note that I don't consider
CPU extensions like AES-NI to be "HW accelerated"; anything that uses only
CPU instructions is "just" software in my world.) That latency implies you
need to pipeline requests to unleash the accelerator's true performance. So
if you need high-throughput crypto with low CPU utilization, you should
write your application accordingly and not unnecessarily serialize your
requests.
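
To illustrate what I mean by pipelining, here is a minimal sketch against
the kernel AEAD API (completion callback signature as in the kernels
discussed in this thread). The names submit_pipelined() and pipeline_done()
are made up for the sketch, and buffer/IV setup, request freeing and error
handling are stripped down -- this is not code from any driver.

    #include <crypto/aead.h>
    #include <linux/scatterlist.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>

    /* Runs from the driver/engine context, possibly long after submission
     * and possibly out of order with respect to other requests. A real
     * implementation would free the request here and hand the result on.
     */
    static void pipeline_done(struct crypto_async_request *base, int err)
    {
        /* base->data is the ctx pointer passed to
         * aead_request_set_callback() below; err < 0 on failure.
         */
    }

    static int submit_pipelined(struct crypto_aead *tfm,
                                struct scatterlist *src,
                                struct scatterlist *dst,
                                unsigned int cryptlen, u8 *iv, void *ctx)
    {
        struct aead_request *req = aead_request_alloc(tfm, GFP_ATOMIC);
        int ret;

        if (!req)
            return -ENOMEM;

        aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                  pipeline_done, ctx);
        aead_request_set_crypt(req, src, dst, cryptlen, iv);
        aead_request_set_ad(req, 0);

        ret = crypto_aead_encrypt(req);
        if (ret == -EINPROGRESS || ret == -EBUSY)
            return 0;       /* queued; go submit the next packet now */

        pipeline_done(&req->base, ret);  /* completed synchronously */
        return 0;
    }

The point is simply that the caller keeps submitting independent requests
instead of blocking on each one, so an external engine always has work
queued up.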

With networking protocols you often also have a requirement to minimize
packet reordering, so I understand it's a careful balance. But it is
possible to serialize the important stuff and still do the crypto out of
order, which would be a real benefit on at least _some_ platforms (those
that have HW crypto acceleration but no such CPU extensions).
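
By "serialize the important stuff" I mean something like the classic
reorder-queue pattern sketched below: completions may arrive in any order,
but packets are only released upward in sequence. This is a conceptual
sketch, not code from any driver; deliver_to_stack() is a placeholder for
whatever consumes the packets, and locking between the submission and
completion contexts is omitted.

    #include <linux/types.h>

    #define REORDER_RING 256            /* bounds the in-flight window */

    struct reorder_slot {
        void *pkt;                      /* finished packet, valid when done */
        bool done;
    };

    static struct reorder_slot ring[REORDER_RING];
    static unsigned int next_deliver;   /* next sequence owed to the stack */

    void deliver_to_stack(void *pkt);   /* placeholder consumer */

    /* Release every packet now contiguous with the delivery point. */
    static void flush_in_order(void)
    {
        while (ring[next_deliver % REORDER_RING].done) {
            struct reorder_slot *slot = &ring[next_deliver % REORDER_RING];

            deliver_to_stack(slot->pkt);
            slot->done = false;
            next_deliver++;
        }
    }

    /* Called from the (out-of-order) crypto completion for sequence 'seq'. */
    static void crypto_complete_seq(unsigned int seq, void *pkt)
    {
        struct reorder_slot *slot = &ring[seq % REORDER_RING];

        slot->pkt = pkt;
        slot->done = true;
        flush_in_order();
    }

The crypto itself stays fully parallel and out of order; only the hand-off
to the stack is serialized, which is, as I understand it, the property the
macsec commit is trying to preserve.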

> > So, I'd like to focus on what we can do to
> > improve crypto/simd.c to provide a synchronous implementation of
> > gcm(aes) for AES-NI when possible, which is the vast majority of the time.
> >
> > I would be interested in proposing a change to address this issue, but
> > I'm not sure which direction the maintainers of this code would
> > prefer. Since these changes to the crypto API are fairly recent, there
> > may be context that I am not aware of. However, I think it would be
> > straightforward to add another API to crypto/simd.c that allocates sync
> > algorithms, and I would be willing to do the work.
> >
> > The only challenge I see in implementing such a change is deciding how
> > to select a fallback algorithm. The most flexible solution would be to
> > call crypto_alloc_aead with CRYPTO_ALG_ASYNC during init to pick the
> > "best" fallback (in case alternative HW offloading is available),
> > but that would almost certainly pick itself, and it's not obvious to me
> > how to avoid that.
>
> It's probably possible to add a PURE_SOFTWARE or whatever flag and
> request one of those algorithms for the fallback.
>
Forcing the use of sync-only algorithms would be detrimental to platforms
that do not have CPU-accelerated crypto but do have HW acceleration for
crypto external to the CPU. I understand it's much easier to implement,
but that is just being lazy IMHO. For bulk crypto of relatively independent
blocks (networking packets, disk sectors), ASYNC should always be preferred.
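
For reference, the flag-based fallback request suggested above might look
roughly like the sketch below, issued once at init time as Scott suggests.
CRYPTO_ALG_PURE_SOFTWARE does not exist in the kernel today; both the name
and the bit value are made up here purely for illustration.

    #include <crypto/aead.h>
    #include <linux/err.h>

    /* Hypothetical flag meaning "implemented with plain CPU instructions:
     * no cryptd, no external engine". A SIMD wrapper could then request
     * such an algorithm as its own fallback without the rest of the API
     * having to exclude external HW engines. The bit value is made up;
     * a real patch would have to pick a free CRYPTO_ALG_* bit.
     */
    #define CRYPTO_ALG_PURE_SOFTWARE 0x00800000

    static struct crypto_aead *pick_sw_fallback(void)
    {
        /* Passing the flag in both type and mask requires it to be set
         * on whatever implementation the lookup returns.
         */
        struct crypto_aead *tfm =
                crypto_alloc_aead("gcm(aes)",
                                  CRYPTO_ALG_PURE_SOFTWARE,
                                  CRYPTO_ALG_PURE_SOFTWARE);

        return IS_ERR(tfm) ? NULL : tfm;
    }

Whether that flag is worth adding is exactly the question; my point above is
only that the fallback selection should not force everyone down a sync-only
path.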

> > On the other hand, the caller of the new API could
> > explicitly declare a fallback algorithm (e.g.,
> > "gcm_base(ctr(aes-aesni),ghash-generic)"), which is probably the correct
> > answer anyway --
>
> I would try to avoid that; it seems too error-prone to me.
>
> > what are the chances that there are multiple HW
> > offloads for gcm(aes)? In that case, a possible API would be:
> > int simd_register_aeads_compat_sync(struct aead_alg *algs,
> >                                     char **fallback_algs,
> >                                     int count,
> >                                     struct simd_aead_alg **simd_algs);
> >
> > Beyond MACsec, it's worth noting that the mac80211 code for AES-GCMP and
> > BIP-GMAC also uses gcm(aes) in sync mode because decryption occurs in a
> > softirq; however, I imagine nobody has reported an issue because the link
> > speed is typically slower and those encryption modes are still uncommon.
>
> Decent wireless cards would do the encryption in hw, no? Also, you
> can't notice a performance regression if it has never used the fast
> implementation :)
>
> --
> Sabrina
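
As a final note, for anyone joining from linux-crypto without the netdev
context: as I understand it, the regression boils down to the mask MACsec
now passes when allocating its transform, roughly as below (paraphrased
from the commit message and this thread, not a verbatim copy of
drivers/net/macsec.c).

    #include <crypto/aead.h>

    static struct crypto_aead *macsec_tfm_example(void)
    {
        /* Before commit ab046a5d4be4, any gcm(aes) could be selected,
         * including the async cryptd-backed AES-NI one and external HW
         * engines:
         *
         *     return crypto_alloc_aead("gcm(aes)", 0, 0);
         *
         * After the commit, CRYPTO_ALG_ASYNC is put in the mask, so only
         * a synchronous implementation can match. That preserves frame
         * ordering, but on x86 the lookup then lands on
         * gcm_base(ctr(aes-aesni),ghash-generic) instead of the AES-NI
         * gcm(aes), which is where the ghash-generic hotspot comes from.
         */
        return crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
    }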

Regards,
Pascal van Leeuwen
Silicon IP Architect Multi-Protocol Engines, Rambus Security
Rambus ROTW Holding BV
+31-73 6581953

Note: The Inside Secure/Verimatrix Silicon IP team was recently acquired by Rambus.
Please be so kind as to update your e-mail address book with my new e-mail address.

