lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <ZzR5i9sO-xwoJcDB@hog>
Date: Wed, 13 Nov 2024 11:03:55 +0100
From: Sabrina Dubroca <sd@...asysnail.net>
To: Sergey Ryazanov <ryazanov.s.a@...il.com>
Cc: Antonio Quartulli <antonio@...nvpn.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Donald Hunter <donald.hunter@...il.com>,
	Shuah Khan <shuah@...nel.org>, Andrew Lunn <andrew@...n.ch>,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next v11 06/23] ovpn: introduce the ovpn_peer object

2024-11-13, 03:37:13 +0200, Sergey Ryazanov wrote:
> On 12.11.2024 19:31, Sabrina Dubroca wrote:
> > 2024-11-10, 15:38:27 +0200, Sergey Ryazanov wrote:
> > > On 29.10.2024 12:47, Antonio Quartulli wrote:
> > > > An ovpn_peer object holds the whole status of a remote peer
> > > > (regardless of whether it is a server or a client).
> > > > 
> > > > This includes status for crypto, tx/rx buffers, napi, etc.
> > > > 
> > > > Only support for one peer is introduced (P2P mode).
> > > > Multi peer support is introduced with a later patch.
> > > 
> > > Reviewing the peer creation/destruction code, I arrived at a general question.
> > > Did you consider keeping a single P2P peer in the peers table as well?
> > > 
> > > Looks like such an approach could greatly simplify the code by dropping all these
> > > 'switch (ovpn->mode)' checks and implementing a unified peer management. The
> > > 'peer' field in the main private data structure can be kept to accelerate
> > > lookups, still using peers table for management tasks like removing all the
> > > peers on the interface teardown.
> > 
> > It would save a few 'switch(mode)', but force every client to allocate
> > the hashtable for no reason at all. That tradeoff doesn't look very
> > beneficial to me, the P2P-specific code is really simple. And if you
> > keep ovpn->peer to make lookups faster, you're not removing that many
> > 'switch(mode)'.
> 
> Looking back at the review, I can conclude in retrospect that I personally
> do not like short 'switch' statements and special handlers :)
> 
> Seriously, this module has the highest density of switch statements per KLOC
> that I have seen, and a major part of them is dedicated to handling the
> special case of a P2P connection.

I think it's fine. Either way there will be two implementations of
whatever mode-dependent operation needs to be done. switch doesn't
make it more complex than an ops structure.

If you're reading the current version and find ovpn_peer_add, you see
directly that it'll do either ovpn_peer_add_mp or
ovpn_peer_add_p2p. With an ops structure, you'd have a call to
ovpn->ops->peer_add, and you'd have to look up all possible ops
structures to know that it can be either ovpn_peer_add_mp or
ovpn_peer_add_p2p. If there's an undefined number of implementations
living in different modules (like net_device_ops, or L4 protocols),
you don't have a choice.
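To make the comparison concrete, here is a minimal userspace C sketch of the two dispatch styles being discussed. The function names echo the ones in the patch, but the stub bodies and the ops structure are invented for illustration, not the actual module code:

```c
enum ovpn_mode { OVPN_MODE_P2P, OVPN_MODE_MP };

/* Stubs standing in for the real mode-specific helpers. */
static int ovpn_peer_add_p2p(void) { return 1; }
static int ovpn_peer_add_mp(void)  { return 2; }

/* Switch-based dispatch: both possible callees are visible right at the
 * call site, which is the readability argument made above. */
static int ovpn_peer_add(enum ovpn_mode mode)
{
	switch (mode) {
	case OVPN_MODE_P2P:
		return ovpn_peer_add_p2p();
	case OVPN_MODE_MP:
		return ovpn_peer_add_mp();
	}
	return -1;
}

/* Ops-structure alternative: the callee hides behind a pointer, so a
 * reader must track down every ops instance to find the implementations. */
struct ovpn_ops {
	int (*peer_add)(void);
};

static const struct ovpn_ops p2p_ops = { .peer_add = ovpn_peer_add_p2p };

static int ovpn_peer_add_via_ops(const struct ovpn_ops *ops)
{
	return ops->peer_add();
}
```

With exactly two fixed implementations, the switch carries the same information as the ops table while keeping the control flow greppable.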

xfrm went the opposite way to what you're proposing a few years ago
(see commit 0c620e97b349 ("xfrm: remove output indirection from
xfrm_mode") and others), and it made the code simpler.


> Altogether it looks too unusual, so it feels like a
> flaw in the design.

I don't think it's a flaw in the design, maybe just different needs
from other code you've seen (but similar in some ways to xfrm).

> I racked my brains to come up with a better solution and
> failed. So I took a different approach, inviting people to discuss individual
> pieces of the code to find a solution collectively, or to realize that there
> is no better solution for now.

Sure. And I think there is no better solution, so I'm answering this
thread to say that.

> The problem is that all these hash tables become inefficient with a single
> entry (the P2P case). I was thinking about allocating a table with a single
> bin, but that still requires running the hash function to access the indexed
> entry.

And the current implementation relies on fixed-size hashtables
(hash_for_each_safe -> HASH_SIZE -> ARRAY_SIZE -> sizeof).
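For reference, the fixed size falls out of how the kernel's hashtable.h macros work: the bucket count is derived from the array's sizeof at compile time, so the helpers cannot operate on a dynamically allocated table. A simplified userspace rendition of that macro chain (stub types, not the kernel headers):

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel macros referenced above. The
 * bucket count comes from sizeof on a statically declared array, which
 * is why the table size must be fixed at compile time. */
struct hlist_head_stub { void *first; };

#define ARRAY_SIZE(arr)  (sizeof(arr) / sizeof((arr)[0]))
#define HASH_SIZE(name)  (ARRAY_SIZE(name))
#define DECLARE_HASHTABLE(name, bits) \
	struct hlist_head_stub name[1 << (bits)]

/* 13 bits -> 8192 buckets, matching the ~8k-bin table discussed below. */
static DECLARE_HASHTABLE(peers_by_id, 13);

static size_t peers_table_buckets(void)
{
	return HASH_SIZE(peers_by_id);
}
```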

> And back to the hashtable(s) size for the MP mode. An 8k-bin table looks like
> a good choice for a normal server with a 1-2Gb uplink serving up to 1k
> connections. But it is still unclear how this choice affects installations
> with a bigger number of connections. And is this module applicable to
> embedded setups? E.g. running a couple of VPN servers on a home router
> with a few actual connections looks like a waste of RAM. I was about to
> suggest using rhashtable due to its dynamic sizing feature, but the module
> needs three tables. Any better idea?

For this initial implementation I think it's fine. Sure, converting to
rhashtable (or some other type of dynamically-sized hashtable, if
rhashtable doesn't fit) in the future would make sense. But I don't
think it's necessary to get the patches into net-next.
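For anyone picking this up later, the core idea behind such a conversion is just load-factor-driven resizing: start tiny (cheap for a P2P setup with one peer) and grow as peers are added (a busy server). A minimal userspace sketch of that mechanism follows; all names are illustrative and this is not the kernel rhashtable API, which additionally handles RCU-safe concurrent resizing:

```c
#include <stdlib.h>

/* Illustrative dynamically-sized chained hash table. */
struct node {
	unsigned int key;
	struct node *next;
};

struct dyn_table {
	struct node **buckets;
	size_t nbuckets;
	size_t nitems;
};

static void table_init(struct dyn_table *t)
{
	t->nbuckets = 2;	/* tiny initial table: cheap for P2P */
	t->buckets = calloc(t->nbuckets, sizeof(*t->buckets));
	t->nitems = 0;
}

/* Double the bucket array and rehash every entry into it. */
static void table_grow(struct dyn_table *t)
{
	size_t newn = t->nbuckets * 2;
	struct node **nb = calloc(newn, sizeof(*nb));

	for (size_t i = 0; i < t->nbuckets; i++) {
		struct node *n = t->buckets[i];
		while (n) {
			struct node *next = n->next;
			size_t b = n->key % newn;
			n->next = nb[b];
			nb[b] = n;
			n = next;
		}
	}
	free(t->buckets);
	t->buckets = nb;
	t->nbuckets = newn;
}

static void table_insert(struct dyn_table *t, unsigned int key)
{
	struct node *n;

	if (t->nitems + 1 > t->nbuckets)	/* load factor > 1: resize */
		table_grow(t);
	n = malloc(sizeof(*n));
	n->key = key;
	n->next = t->buckets[key % t->nbuckets];
	t->buckets[key % t->nbuckets] = n;
	t->nitems++;
}

static int table_contains(const struct dyn_table *t, unsigned int key)
{
	for (struct node *n = t->buckets[key % t->nbuckets]; n; n = n->next)
		if (n->key == key)
			return 1;
	return 0;
}
```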

-- 
Sabrina
