Message-ID: <fab5f9f2abdc478702fc7f9a831de418a1234e38.camel@redhat.com>
Date: Sat, 23 Jan 2021 08:10:37 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
mptcp@...ts.01.org, Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH v2 net-next 5/5] mptcp: implement delegated actions
On Fri, 2021-01-22 at 15:23 -0800, Jakub Kicinski wrote:
> On Fri, 22 Jan 2021 09:25:07 +0100 Paolo Abeni wrote:
> > > Do you need it because of locking?
> >
> > This infrastructure is used to avoid the workqueue usage in the MPTCP
> > receive path (to push pending data). With many MPTCP connections
> > established, the workqueue would be very bad for throughput and
> > latency. This infrastructure is not strictly needed from a functional
> > PoV, but I was unable to find any other way to avoid the workqueue
> > usage.
>
> But it is due to locking or is it not? Because you're running the
> callback in the same context, so otherwise why not just call the
> function directly? Can't be batching, it's after GRO so we won't
> batch much more.
Thank you for the feedback.
Let me try to elaborate a bit more on this. When processing an input
packet (MPTCP data ack) on MPTCP subflow A, under the subflow A
socket lock, we may need to push some pending data via a different
subflow B, depending on the MPTCP packet scheduler decision. We can't
try to acquire the subflow B socket lock there, as that would open an
ABBA deadlock.
Both the workqueue and this infrastructure avoid the deadlock by
breaking the lock chain.
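
To make the lock-chain-breaking idea a bit more concrete, here is a
minimal user-space analogy using pthread mutexes in place of the two
subflow socket locks. It is only a sketch of the pattern, not the actual
MPTCP code: all names (process_ack_on_a, run_delegated_actions, the
push_b_pending flag) are made up for illustration.

	/* Analogy of the deferral pattern: never take B's lock while
	 * holding A's; instead record the pending work and run it later
	 * from the same context, after A's lock has been released. */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "subflow A" */
	static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "subflow B" */
	static bool push_b_pending;                                /* "delegated" action */

	/* Runs under A's lock (think: processing a data ack on subflow A). */
	static void process_ack_on_a(void)
	{
		pthread_mutex_lock(&lock_a);

		/*
		 * Taking lock_b here, while lock_a is held, would risk an
		 * ABBA deadlock with a path that locks B first and then A.
		 * So just note that B has work pending.
		 */
		push_b_pending = true;

		pthread_mutex_unlock(&lock_a);
	}

	/* Runs later in the same context, with A's lock already dropped. */
	static void run_delegated_actions(void)
	{
		if (!push_b_pending)
			return;
		push_b_pending = false;

		pthread_mutex_lock(&lock_b);
		printf("pushing pending data on subflow B\n");
		pthread_mutex_unlock(&lock_b);
	}

	int main(void)
	{
		process_ack_on_a();
		run_delegated_actions();	/* A's and B's locks are never held together */
		return 0;
	}

The point of the infrastructure in the patch is that this "run it later"
step happens without bouncing to a workqueue.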
This should not have any bad interaction with threaded NAPI or busy
polling, but I have not experimented with that yet. I'm placing it on my
TODO list.
Thanks!
Paolo