Message-ID: <20241205165830.64da6fd7@elisabeth>
Date: Thu, 5 Dec 2024 16:58:30 +0100
From: Stefano Brivio <sbrivio@...hat.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>, Eric Dumazet
<edumazet@...gle.com>, netdev@...r.kernel.org, Kuniyuki Iwashima
<kuniyu@...zon.com>, Mike Manning <mvrmanning@...il.com>, David Gibson
<david@...son.dropbear.id.au>, Paul Holzinger <pholzing@...hat.com>, Philo
Lu <lulie@...ux.alibaba.com>, Cambda Zhu <cambda@...ux.alibaba.com>, Fred
Chen <fred.cc@...baba-inc.com>, Yubing Qiu
<yubing.qiuyubing@...baba-inc.com>
Subject: Re: [PATCH net-next 2/2] datagram, udp: Set local address and
rehash socket atomically against lookup

On Thu, 5 Dec 2024 10:30:14 +0100
Paolo Abeni <pabeni@...hat.com> wrote:
> On 12/4/24 23:12, Stefano Brivio wrote:
>
> > [...]
> >
> > To fix this, replace the rehash operation by a set_rcv_saddr()
> > callback holding the spinlock on the primary hash chain, just like
> > the rehash operation used to do, but also setting the address (via
> > inet_update_saddr(), moved to headers) while holding the spinlock.
> >
> > To make this atomic against the lookup operation, also acquire the
> > spinlock on the primary chain there.
>
> I'm sorry for the late feedback.
>
> I'm concerned by the unconditional spinlock in __udp4_lib_lookup(). I
> fear it could cause performance regressions in different workloads:
> heavy UDP unicast flow, or even TCP over UDP tunnel when the NIC
> supports RX offload for the relevant UDP tunnel protocol.
>
> In the first case there will be an additional atomic operation per packet.
So, I've been looking into this a bit, and request-response rates with
neper's udp_rr (https://github.com/google/neper/blob/master/udp_rr.c)
for a client/server pair over the loopback interface are the same
before and after this patch.
The reason is, I suppose, that the only contention on that spinlock is
the "intended" one, that is, between connect() and lookup.
Then I moved on to bulk flows, with socat or iperf3. But there (and
that's the whole point of this fix) we have connected sockets, and once
they are connected, we switch to early demux, which is not affected by
this patch.
In the end, I don't think this will affect "regular", bulk unicast
flows, because applications using them will typically connect sockets,
and we'll switch to early demux right away.
This lookup is not exactly "slow path", but it's not fast path either.
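
For context, the connected fast path boils down to something like
this. It's a simplified sketch of the early demux idea, not the
actual net/ipv4/udp.c code, and the helper names are made up:

/* Connected sockets are matched on the full four-tuple under RCU
 * alone, so the bucket spinlock added by this patch is never taken
 * on this path.
 */
static struct sock *sketch_udp_early_demux(struct net *net,
					   __be32 saddr, __be16 sport,
					   __be32 daddr, __be16 dport)
{
	struct sock *sk;

	/* walk the (address, port) chain under rcu_read_lock() */
	sketch_for_each_socket_rcu(sk, net, daddr, dport) {
		if (sk->sk_state == TCP_ESTABLISHED &&
		    sketch_four_tuple_match(sk, saddr, sport,
					    daddr, dport))
			return sk;	/* lockless fast path */
	}

	return NULL;
}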
> In the latter the spin_lock will be contended with multiple concurrent
> TCP over UDP tunnel flows: the NIC with UDP tunnel offload can use the
> inner header to compute the RX hash, and use different rx queues for
> such flows.
>
> The GRO stage will perform UDP tunnel socket lookup and will contend the
> bucket lock.
In this case (I couldn't figure this out yet), aren't the sockets
connected? I would expect that we switch to the early demux path
relatively soon for anything that needs somewhat high throughput.
And if we don't, probably the more reasonable alternative would be to
"fix" that, rather than keeping this relatively common case broken.
Do you have a benchmark or something I can run?
> > This results in some awkwardness at a caller site, specifically
> > sock_bindtoindex_locked(), where we really just need to rehash the
> > socket without changing its address. With the new operation, we now
> > need to forcibly set the current address again.
> >
> > On the other hand, this appears more elegant than alternatives such
> > as fetching the spinlock reference in ip4_datagram_connect() and
> > ip6_datagram_connect(), and keeping the rehash operation around for
> > a single user also seems a tad overkill.
>
> Would such option require the same additional lock at lookup time?
Yes, it's conceptually the same; we would pretty much just move code
around.
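
Either way, the essential shape of the change stays the same. Roughly
(made-up names, not the actual patch):

static void sketch_set_rcv_saddr(struct sock *sk, __be32 saddr)
{
	/* same chain lock the lookup side now takes too */
	spinlock_t *lock = sketch_primary_chain_lock(sk);

	spin_lock_bh(lock);
	sk->sk_rcv_saddr = saddr;	/* new address... */
	sketch_rehash_locked(sk);	/* ...new chain, atomically */
	spin_unlock_bh(lock);
}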
I've been thinking about possible alternatives but they all involve a
much bigger rework. One idea could be that we RCU-connect() sockets,
instead of just having the hash table insertion under RCU. That is, as
long as we're in the grace period, the lookup would still see the old
receive address.
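
Conceptually, that's just the usual RCU publish pattern. A sketch of
the idea only, with invented names, nowhere near an implementation:

struct sketch_rcv_addr {
	__be32		saddr;
	struct rcu_head	rcu;
};

static int sketch_rcu_connect(struct sketch_rcv_addr __rcu **slot,
			      __be32 new_saddr)
{
	struct sketch_rcv_addr *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	new->saddr = new_saddr;

	/* lookups dereference *slot under rcu_read_lock() */
	old = rcu_replace_pointer(*slot, new, 1);
	if (old)
		kfree_rcu(old, rcu);	/* old address stays visible
					 * until the grace period ends */
	return 0;
}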
But, especially now that we have *three* hash tables, this is extremely
involved, and perhaps would warrant a rewrite of the whole thing. Given
that we're currently breaking users, I'd rather fix this first.
Sure, things have been broken for 19 years, so I guess it's okay to
defer this fix to net-next (see the discussion around the RFC), but I'd
still suggest that we fix this as a first step, because the breakage is
embarrassingly obvious (see the reproducers).
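
For reference, the gist of those reproducers is something like the
sketch below (error handling omitted, and this is not one of the
actual scripts from that discussion): flood the port from a separate
sender while the receiver connect()s, and watch datagrams go missing
in the rehash window.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in local = { .sin_family = AF_INET,
				     .sin_port = htons(5555) };
	struct sockaddr_in peer;
	socklen_t len = sizeof(peer);
	char buf[16];
	int r = socket(AF_INET, SOCK_DGRAM, 0);

	bind(r, (struct sockaddr *)&local, sizeof(local)); /* 0.0.0.0 */

	/* learn the peer from the first datagram... */
	recvfrom(r, buf, sizeof(buf), 0,
		 (struct sockaddr *)&peer, &len);

	/* ...then connect(): this sets the local address and rehashes
	 * the socket, and datagrams the peer sends while this runs
	 * race with the lookup and can miss the socket entirely
	 */
	connect(r, (struct sockaddr *)&peer, sizeof(peer));

	/* count gaps here (e.g. sequence numbers) to observe drops */
	while (recv(r, buf, sizeof(buf), 0) >= 0)
		;

	return 0;
}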
--
Stefano