Message-ID: <0273cf51-fbca-453d-81da-777b9462ce3c@openvpn.net>
Date: Fri, 8 Mar 2024 16:44:01 +0100
From: Antonio Quartulli <antonio@...nvpn.net>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
 netdev@...r.kernel.org
Cc: Jakub Kicinski <kuba@...nel.org>, Sergey Ryazanov
 <ryazanov.s.a@...il.com>, Paolo Abeni <pabeni@...hat.com>,
 Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH net-next v2 08/22] ovpn: implement basic TX path (UDP)

Hi Toke,

On 08/03/2024 16:31, Toke Høiland-Jørgensen wrote:
> Antonio Quartulli <antonio@...nvpn.net> writes:
> 
>> +/* send skb to connected peer, if any */
>> +static void ovpn_queue_skb(struct ovpn_struct *ovpn, struct sk_buff *skb, struct ovpn_peer *peer)
>> +{
>> +	int ret;
>> +
>> +	if (likely(!peer))
>> +		/* retrieve peer serving the destination IP of this packet */
>> +		peer = ovpn_peer_lookup_by_dst(ovpn, skb);
>> +	if (unlikely(!peer)) {
>> +		net_dbg_ratelimited("%s: no peer to send data to\n", ovpn->dev->name);
>> +		goto drop;
>> +	}
>> +
>> +	ret = ptr_ring_produce_bh(&peer->tx_ring, skb);
>> +	if (unlikely(ret < 0)) {
>> +		net_err_ratelimited("%s: cannot queue packet to TX ring\n", peer->ovpn->dev->name);
>> +		goto drop;
>> +	}
>> +
>> +	if (!queue_work(ovpn->crypto_wq, &peer->encrypt_work))
>> +		ovpn_peer_put(peer);
>> +
>> +	return;
>> +drop:
>> +	if (peer)
>> +		ovpn_peer_put(peer);
>> +	kfree_skb_list(skb);
>> +}
> 
> So this puts packets on a per-peer 1024-packet FIFO queue with no
> backpressure? That sounds like a pretty terrible bufferbloat situation.
> Did you do any kind of latency-under-load testing of this, such as
> running the RRUL test[0] through it?

Thanks for pointing this out.

Andrew Lunn just raised a similar point about these rings being 
potential bufferbloat pitfalls.

And I totally agree.

I haven't performed any specific test, but I have already observed 
latency spikes here and there under heavy load.

Andrew suggested at least reducing the ring size to something like 128 
and then looking at BQL.

Do you have any hint as to what may make sense for a first 
implementation, balancing complexity and good results?


Thanks a lot.

Regards,



> 
> -Toke
> 
> [0] https://flent.org/tests.html#the-realtime-response-under-load-rrul-test
> 

-- 
Antonio Quartulli
OpenVPN Inc.
