Message-ID: <91005d44-8a51-4c6d-9f5c-d5951d92f7c5@openvpn.net>
Date: Wed, 6 Mar 2024 17:03:10 +0100
From: Antonio Quartulli <antonio@...nvpn.net>
To: Andrew Lunn <andrew@...n.ch>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
 Sergey Ryazanov <ryazanov.s.a@...il.com>, Paolo Abeni <pabeni@...hat.com>,
 Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH net-next v2 06/22] ovpn: introduce the ovpn_peer object

On 04/03/2024 23:56, Andrew Lunn wrote:
>> +	ret = ptr_ring_init(&peer->tx_ring, OVPN_QUEUE_LEN, GFP_KERNEL);
>> +	if (ret < 0) {
>> +		netdev_err(ovpn->dev, "%s: cannot allocate TX ring\n", __func__);
>> +		goto err_dst_cache;
>> +	}
>> +
>> +	ret = ptr_ring_init(&peer->rx_ring, OVPN_QUEUE_LEN, GFP_KERNEL);
>> +	if (ret < 0) {
>> +		netdev_err(ovpn->dev, "%s: cannot allocate RX ring\n", __func__);
>> +		goto err_tx_ring;
>> +	}
>> +
>> +	ret = ptr_ring_init(&peer->netif_rx_ring, OVPN_QUEUE_LEN, GFP_KERNEL);
>> +	if (ret < 0) {
>> +		netdev_err(ovpn->dev, "%s: cannot allocate NETIF RX ring\n", __func__);
>> +		goto err_rx_ring;
>> +	}
> 
> These rings are 1024 entries? 

Yes

> The real netif below also likely has
> another 1024 entry ring. Rings like this are latency. Is there a BQL
> like mechanism to actually keep the rings empty, throw packets away
> rather than queue them, because queueing them just accumulates
> latency?

No, BQL is not implemented yet.

> 
> So, i guess my question is, how do you avoid bufferbloat? Why do you
> actually need these rings?

This is a very good point, and one where I could use some input/feedback.
I have not ignored the problem, but I was hoping to solve it in a future 
iteration (all the better if we can get it out of the way right now).

The reason for having these rings is to pass packets between contexts.

When packets are received from the network in softirq context, they are 
queued in the rx_ring and later processed by a dedicated worker. The 
latter also takes care of decryption, which may sleep.

The symmetric process happens for packets sent by the user to the device: 
they are queued in the tx_ring and then encrypted by the dedicated worker.

netif_rx_ring is just a queue for NAPI.
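
To make the hand-off concrete, here is a minimal sketch of the pattern I 
described above (the ovpn_peer_sketch/ovpn_rx_softirq/ovpn_decrypt_work 
names and the drop-on-full policy are made up for illustration, not the 
actual driver code):

#include <linux/kernel.h>
#include <linux/ptr_ring.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

struct ovpn_peer_sketch {
	struct ptr_ring rx_ring;		/* filled from softirq context */
	struct work_struct decrypt_work;	/* drains rx_ring, may sleep */
};

/* softirq context: cannot sleep, so only enqueue and kick the worker */
static void ovpn_rx_softirq(struct ovpn_peer_sketch *peer, struct sk_buff *skb)
{
	/* single producer (softirq) / single consumer (worker) assumed */
	if (ptr_ring_produce(&peer->rx_ring, skb)) {
		kfree_skb(skb);	/* ring full: drop instead of queueing more */
		return;
	}
	schedule_work(&peer->decrypt_work);
}

/* process context: free to sleep while decrypting */
static void ovpn_decrypt_work(struct work_struct *work)
{
	struct ovpn_peer_sketch *peer =
		container_of(work, struct ovpn_peer_sketch, decrypt_work);
	struct sk_buff *skb;

	while ((skb = ptr_ring_consume(&peer->rx_ring))) {
		/* decrypt skb here (may sleep), then hand it to the stack */
	}
}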


I can definitely have a look at BQL, but feel free to drop me any 
pointers/keywords as to what I should look at.
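
If I understand the API correctly, BQL boils down to pairing 
netdev_tx_sent_queue() with netdev_tx_completed_queue() in the driver, so 
the stack can cap how many bytes sit in the ring. Very roughly (hypothetical 
ovpn_* names, single queue assumed, not actual driver code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t ovpn_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq =
		netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* BQL: account the bytes handed to the TX path */
	netdev_tx_sent_queue(txq, skb->len);

	/* ... enqueue skb for encryption/transmission ... */

	return NETDEV_TX_OK;
}

/* called once the packets have actually left (e.g. after encrypt + send) */
static void ovpn_tx_complete_sketch(struct net_device *dev, unsigned int pkts,
				    unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

	/* BQL: release the completed bytes so the queue can refill */
	netdev_tx_completed_queue(txq, pkts, bytes);
}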


Regards,



-- 
Antonio Quartulli
OpenVPN Inc.
