Date:   Tue, 18 Oct 2016 16:24:07 +0200
From:   Johannes Berg <johannes@...solutions.net>
To:     Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc:     "<linux-wireless@...r.kernel.org>" <linux-wireless@...r.kernel.org>,
        "<netdev@...r.kernel.org>" <netdev@...r.kernel.org>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Jouni Malinen <j@...fi>, Andy Lutomirski <luto@...capital.net>
Subject: Re: [RFC PATCH 2/2] mac80211: aes_ccm: cache AEAD request
 structures per CPU

On Tue, 2016-10-18 at 15:18 +0100, Ard Biesheuvel wrote:
> 
> > Hmm. Is it really worth having a per-CPU variable for each possible
> > key? You could have a large number of those (typically three when
> > you're a client on an AP, and 1 + 1 for each client when you're the
> > AP).

2 + 1 for each client, actually, since you have 2 GTKs present in the
"steady state"; not a big difference though.

> > Would it be so bad to have to set the TFM every time (if that's
> > even possible), and just have a single per-CPU cache?

> That would be preferred, yes. The only snag here is that
> crypto_alloc_aead() is not guaranteed to return the same algo every
> time, which means the request size is not guaranteed to be the same
> either. This is a rare corner case, of course, but it needs to be
> dealt with regardless.

Ah, good point. Well, I guess you could allocate a bigger one if it's
too small, but then we'd have to recalculate the size all the time
(which we already do anyway, but it would be good to save that work).
Then we'd be close to just having a per-CPU memory block cache, though.
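
Something like this, then, for the single-cache variant (only a
sketch, assuming the caller has preemption disabled around the TX
path; the ccm_req_cache/ccm_req_size names are made up):

#include <linux/percpu.h>
#include <linux/slab.h>
#include <crypto/aead.h>

static DEFINE_PER_CPU(struct aead_request *, ccm_req_cache);
static DEFINE_PER_CPU(unsigned int, ccm_req_size);

static struct aead_request *ccm_get_request(struct crypto_aead *tfm)
{
	/*
	 * The required size depends on which implementation
	 * crypto_alloc_aead() picked for this key's tfm, so it can
	 * differ between keys sharing the cache.
	 */
	unsigned int size = sizeof(struct aead_request) +
			    crypto_aead_reqsize(tfm);
	struct aead_request *req = this_cpu_read(ccm_req_cache);

	if (!req || this_cpu_read(ccm_req_size) < size) {
		/* grow the cached allocation for the bigger request */
		kfree(req);
		req = kzalloc(size, GFP_ATOMIC);
		this_cpu_write(ccm_req_cache, req);
		this_cpu_write(ccm_req_size, req ? size : 0);
		if (!req)
			return NULL;
	}

	/* the cache is shared across keys, so set the tfm every time */
	aead_request_set_tfm(req, tfm);
	return req;
}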

johannes
