Message-ID: <e543a3de-44f1-4a2d-90ef-1786e222f0d8@gmail.com>
Date: Wed, 13 Nov 2024 03:37:13 +0200
From: Sergey Ryazanov <ryazanov.s.a@...il.com>
To: Sabrina Dubroca <sd@...asysnail.net>,
Antonio Quartulli <antonio@...nvpn.net>
Cc: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Donald Hunter <donald.hunter@...il.com>,
Shuah Khan <shuah@...nel.org>, Andrew Lunn <andrew@...n.ch>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next v11 06/23] ovpn: introduce the ovpn_peer object
On 12.11.2024 19:31, Sabrina Dubroca wrote:
> 2024-11-10, 15:38:27 +0200, Sergey Ryazanov wrote:
>> On 29.10.2024 12:47, Antonio Quartulli wrote:
>>> An ovpn_peer object holds the whole status of a remote peer
>>> (regardless of whether it is a server or a client).
>>>
>>> This includes status for crypto, tx/rx buffers, napi, etc.
>>>
>>> Only support for one peer is introduced (P2P mode).
>>> Multi-peer support is introduced with a later patch.
>>
>> Reviewing the peer creation/destruction code, I came to a general question.
>> Did you consider keeping a single P2P peer in the peers table as well?
>>
>> Looks like such an approach could greatly simplify the code by dropping
>> all these 'switch (ovpn->mode)' checks and implementing unified peer
>> management. The 'peer' field in the main private data structure can be
>> kept to accelerate lookups, while still using the peers table for
>> management tasks like removing all the peers on interface teardown.
>
> It would save a few 'switch(mode)', but force every client to allocate
> the hashtable for no reason at all. That tradeoff doesn't look very
> beneficial to me, the P2P-specific code is really simple. And if you
> keep ovpn->peer to make lookups faster, you're not removing that many
> 'switch(mode)'.
Looking back at my review, I can retrospectively conclude that I
personally do not like short 'switch' statements and special handlers :)
Seriously, this module has the highest density of switches per KLOC I
have seen so far, and a major part of them is dedicated to handling the
special case of a P2P connection. Taken together this looks unusual
enough to feel like a flaw in the design. I racked my brains to come up
with a better solution and failed. So I took a different approach:
inviting people to discuss individual pieces of the code, either to find
a solution collectively or to realize that there is no better solution
for now.
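
To illustrate, the recurring pattern is roughly this (a sketch with
assumed helper names, not copied verbatim from the patches):

	static void ovpn_peers_release(struct ovpn_struct *ovpn)
	{
		switch (ovpn->mode) {
		case OVPN_MODE_P2P:
			/* special case: drop the single cached peer */
			ovpn_peer_release_p2p(ovpn);
			break;
		case OVPN_MODE_MP:
			/* generic path: flush the peers hashtable */
			ovpn_peers_free(ovpn);
			break;
		}
	}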
The problem is that all these hash tables become inefficient with a
single entry (the P2P case). I was thinking about allocating a table
with a single bin, but that still requires running the hash function
just to access the one indexed entry.
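
To make it concrete: a by-ID lookup through the generic hashtable
helpers runs the hash unconditionally (a sketch, field and function
names assumed):

	static struct ovpn_peer *ovpn_peer_get_by_id(struct ovpn_struct *ovpn,
						     u32 peer_id)
	{
		struct ovpn_peer *peer;

		/* hash_for_each_possible_rcu() computes
		 * hash_min(peer_id, HASH_BITS(table)) to pick the bin,
		 * so the hash function runs even for a degenerate table.
		 */
		hash_for_each_possible_rcu(ovpn->peers_by_id, peer,
					   hash_entry, peer_id)
			if (peer->id == peer_id)
				return peer;

		return NULL;
	}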
And back to the hashtable(s) size for MP mode. An 8k-bin table looks
like a good choice for a normal server with a 1-2Gb uplink serving up to
1k connections. But it is still unclear how this choice affects
installations with a bigger number of connections. And is this module
applicable to embedded setups? E.g. running a couple of VPN servers on a
home router with a few actual connections makes fixed-size tables look
like a waste of RAM. I was about to suggest using rhashtable due to its
dynamic sizing feature, but the module needs three tables. Any better
idea?
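
For reference, the fixed sizing I mean would be declared roughly like
this (a sketch; 13 bits gives 2^13 = 8192 bins per table, and the
actual names/keys in the patch may differ):

	/* allocated up front in the MP private data, even for a
	 * deployment that will only ever see a handful of peers
	 */
	DECLARE_HASHTABLE(peers_by_id, 13);          /* keyed by peer ID  */
	DECLARE_HASHTABLE(peers_by_transp_addr, 13); /* transport address */
	DECLARE_HASHTABLE(peers_by_vpn_addr, 13);    /* VPN IP address    */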
--
Sergey