Message-ID: <20230817083025.2e8047fa@kernel.org>
Date: Thu, 17 Aug 2023 08:30:25 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Wander Lairson Costa <wander@...hat.com>,
Yan Zhai <yan@...udflare.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <martin.lau@...ux.dev>
Subject: Re: [RFC PATCH net-next 0/2] net: Use SMP threads for backlog NAPI.
On Thu, 17 Aug 2023 15:16:12 +0200 Sebastian Andrzej Siewior wrote:
> I've been looking at veth. In the XDP case it has its own NAPI
> instance. In the non-XDP case it uses the backlog. This path is
> entered from ndo_start_xmit in the context of the user's write(), so
> BH is disabled and interrupts are enabled at this point, and it should
> be more or less rate-limited. Couldn't we bypass the backlog in this
> case and deliver the packet directly to the stack?
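
For concreteness, a minimal sketch of the bypass being proposed,
modeled on the shape of veth_forward_skb() in drivers/net/veth.c
(hypothetical and untested; it ignores the problems discussed below).
The idea is that netif_receive_skb() runs protocol processing
synchronously on the sender's stack, where __netif_rx() would enqueue
to the per-CPU backlog and raise NET_RX_SOFTIRQ:

	/* Sketch only: inline delivery instead of backlog queueing for
	 * the non-XDP veth path.
	 */
	static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
				    struct veth_rq *rq, bool xdp)
	{
		if (__dev_forward_skb(dev, skb))
			return NET_RX_DROP;

		if (xdp)
			return veth_xdp_rx(rq, skb);

		/* Was: __netif_rx(skb) -- enqueue to the per-CPU
		 * backlog.  Instead, process the packet right here; the
		 * xmit path has already disabled BH, as noted above.
		 */
		return netif_receive_skb(skb);
	}
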
The backlog in veth eats measurable percentage points of RPS on real
workloads, and I think a number of people have looked at getting rid of
it. So it's a worthy goal for sure, but it may not be a trivial fix.
To my knowledge the two main problems are:
- we don't want to charge the sending application for the processing of
  both "sides" of the connection, plus all the switching costs.
- we may get an AA deadlock if the packet ends up looping back to the
  same device in any way (see the sketch below).
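
To make the AA concern concrete, a hypothetical call chain with the
inline-delivery sketch above (which locks are actually taken depends on
the configuration, so treat this purely as an illustration):

	/*
	 *   write() on a socket
	 *     __dev_queue_xmit(veth0)
	 *       veth_xmit(veth0)
	 *         netif_receive_skb()          inline, same stack
	 *           bridge/router forwards the skb back out
	 *             __dev_queue_xmit(veth0)  A -> A on the same state:
	 *                                      deadlock, or at best
	 *                                      unbounded stack growth
	 */

The core already guards plain xmit recursion with dev_xmit_recursion();
an inline-delivery patch would presumably need a similar escape hatch,
e.g. falling back to the backlog once recursion is detected (again,
only a sketch):

	int ret;

	if (dev_xmit_recursion())
		return __netif_rx(skb);	/* punt to the backlog */

	dev_xmit_recursion_inc();
	ret = netif_receive_skb(skb);
	dev_xmit_recursion_dec();
	return ret;
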
Or at least that's what I remember the problems being at 8 in the
morning :) Adding Daniel and Martin to CC; Paolo would also know this
better than me, but I think he's AFK for the rest of the week.