Date:   Fri, 3 May 2019 08:07:48 +0200
From:   Steffen Klassert <steffen.klassert@...unet.com>
To:     Florian Westphal <fw@...len.de>
CC:     <vakul.garg@....com>, <netdev@...r.kernel.org>
Subject: Re: [RFC HACK] xfrm: make state refcounting percpu

On Wed, Apr 24, 2019 at 12:40:23PM +0200, Florian Westphal wrote:
> I'm not sure this is a good idea to begin with, the refcount
> is right next to the state spinlock, which is taken for both tx and rx ops,
> plus this complicates debugging quite a bit.
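(For reference, the layout Florian refers to is in
include/net/xfrm.h; paraphrased from memory, so the
exact field order may differ between kernel versions,
but the relevant members sit right next to each other:

/* paraphrased excerpt, not a verbatim copy */
struct xfrm_state {
	/* ... hash list nodes ... */
	refcount_t	refcnt;
	spinlock_t	lock;
	/* ... id, selector, replay state ... */
};

i.e. the line would most likely keep bouncing because
of the lock even if the refcount itself became percpu.)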


Hm, what would be the use case where this could help?

The only thing that comes to my mind is a TX state
with wide selectors. In that case you might see
traffic for this state on a lot of CPUs. But in
that case we have a lot of other problems too:
the state lock, the replay window, etc. It might
make more sense to install a full state per CPU,
as this would solve all the other problems as well
(I've talked about that idea at the IPsec workshop).
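
To make the "full state per CPU" idea a bit more
concrete, here is a rough sketch. None of this
exists in the tree; the per-policy 'states' array
and the helper are made up purely for illustration:

#include <linux/smp.h>	/* raw_smp_processor_id() */

struct xfrm_state;	/* opaque here, sketch only */

/* Assume userland installed one SA per possible CPU
 * and the resulting states are kept in a per-policy
 * array indexed by CPU id.  Each CPU then only ever
 * touches its own state, so the state lock, replay
 * window and refcount stay CPU-local.
 */
static struct xfrm_state *pcpu_state_get(struct xfrm_state **states)
{
	return states[raw_smp_processor_id()];
}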

In fact, RFC 7296 allows inserting multiple SAs
with the same traffic selector, so it is possible
to install one state per CPU. We did a PoC for this
at the IETF meeting the week after the IPsec workshop.

One problem that is not solved completely is that,
from the userland point of view, an SA consists of
two states (RX/TX) and this has to be symmetric,
i.e. both ends must have the same number of states.
So if the two ends have different numbers of CPUs,
it is not clear how many states we should install.

We are currently discussing an extension to the
IKEv2 standard so that we can negotiate the
'optimal' number of (per-CPU) SAs for a connection.
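
As a trivial example of what such a negotiation
could end up agreeing on (this is only an
illustration, not what any draft specifies): both
peers announce how many per-CPU SAs they are
willing to use, and the smaller value wins.

/* Hypothetical helper, purely illustrative. */
static unsigned int negotiated_sa_count(unsigned int local_sas,
					unsigned int peer_sas)
{
	return local_sas < peer_sas ? local_sas : peer_sas;
}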
