Message-ID: <f98d9f01-f9c9-4990-ad51-aa46b77ef63d@lunn.ch>
Date: Wed, 6 Mar 2024 20:23:23 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Antonio Quartulli <antonio@...nvpn.net>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
Sergey Ryazanov <ryazanov.s.a@...il.com>,
Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH net-next v2 06/22] ovpn: introduce the ovpn_peer object

> This is a very good point where I might require some input/feedback.
> I have not ignored the problem, but I was hoping to solve it in a future
> iteration. (All the better if we can get it out of the way right now.)
>
> The reason for having these rings is to pass packets between contexts.
>
> When packets are received from the network in softirq context, they are
> queued in the rx_ring and later processed by a dedicated worker. The latter
> also takes care of decryption, which may sleep.
>
> The same process happens, in the opposite direction, for packets sent by
> the user to the device: they are queued in tx_ring and then encrypted by
> the dedicated worker.
>
> netif_rx_ring is just a queue for NAPI.
>
>
> I can definitely have a look at BQL, but feel free to drop me any
> pointers/keywords on what I should look at.
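
The producer/consumer hand-off you describe maps roughly onto something
like the sketch below. All ovpn_* names here are invented for
illustration, this is not your actual code:

#include <linux/ptr_ring.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

/* Illustrative only: softirq producer -> sleeping worker consumer */
struct ovpn_peer_sketch {
	struct ptr_ring rx_ring;	 /* filled from softirq context */
	struct work_struct decrypt_work; /* process context, may sleep */
};

/* process context: drain the ring; decryption may sleep here */
static void ovpn_decrypt_work(struct work_struct *work)
{
	struct ovpn_peer_sketch *peer =
		container_of(work, struct ovpn_peer_sketch, decrypt_work);
	struct sk_buff *skb;

	while ((skb = ptr_ring_consume(&peer->rx_ring))) {
		/* decrypt skb, then hand it to the NAPI/netif_rx_ring side */
	}
}

static int ovpn_peer_sketch_init(struct ovpn_peer_sketch *peer)
{
	INIT_WORK(&peer->decrypt_work, ovpn_decrypt_work);
	return ptr_ring_init(&peer->rx_ring, 128, GFP_KERNEL);
}

/* softirq context: queue the skb and kick the worker */
static int ovpn_rx_enqueue(struct ovpn_peer_sketch *peer, struct sk_buff *skb)
{
	if (ptr_ring_produce(&peer->rx_ring, skb))
		return -ENOSPC;	/* ring full, caller drops the packet */

	schedule_work(&peer->decrypt_work);
	return 0;
}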

Do you have any measurements of the average and maximum fill levels of
these queues? If you could make the rings smaller, the whole question
of latency and bufferbloat disappears. NAPI tends to deal with batches
of up to 64 packets, so could you make these rings 128 entries in size?
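
As for BQL pointers: the hooks are netdev_sent_queue() /
netdev_completed_queue() (or the per-queue netdev_tx_sent_queue() /
netdev_tx_completed_queue() variants) in include/linux/netdevice.h.
Very roughly, and again with made-up ovpn names, the accounting would
sit something like this:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* ndo_start_xmit path: account bytes handed to the encrypt worker */
static netdev_tx_t ovpn_net_xmit_sketch(struct sk_buff *skb,
					struct net_device *dev)
{
	netdev_sent_queue(dev, skb->len);
	/* ... enqueue skb towards the encrypt worker ... */
	return NETDEV_TX_OK;
}

/* called once the worker has encrypted the packet and sent it out */
static void ovpn_tx_complete_sketch(struct net_device *dev,
				    struct sk_buff *skb)
{
	netdev_completed_queue(dev, 1, skb->len);
	/* BQL then bounds how much data can sit in the ring at once */
}

Untested, but it should show where the accounting goes and which
keywords to grep for.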

	Andrew