Message-ID: <c1601a03-0643-41ec-a91c-4eac5d26e693@redhat.com>
Date: Thu, 5 Dec 2024 17:53:33 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Stefano Brivio <sbrivio@...hat.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Eric Dumazet <edumazet@...gle.com>, netdev@...r.kernel.org,
Kuniyuki Iwashima <kuniyu@...zon.com>, Mike Manning <mvrmanning@...il.com>,
David Gibson <david@...son.dropbear.id.au>,
Paul Holzinger <pholzing@...hat.com>, Philo Lu <lulie@...ux.alibaba.com>,
Cambda Zhu <cambda@...ux.alibaba.com>, Fred Chen <fred.cc@...baba-inc.com>,
Yubing Qiu <yubing.qiuyubing@...baba-inc.com>
Subject: Re: [PATCH net-next 2/2] datagram, udp: Set local address and rehash
socket atomically against lookup
On 12/5/24 16:58, Stefano Brivio wrote:
> On Thu, 5 Dec 2024 10:30:14 +0100
> Paolo Abeni <pabeni@...hat.com> wrote:
>
>> On 12/4/24 23:12, Stefano Brivio wrote:
>>
>>> [...]
>>>
>>> To fix this, replace the rehash operation by a set_rcv_saddr()
>>> callback holding the spinlock on the primary hash chain, just like
>>> the rehash operation used to do, but also setting the address (via
>>> inet_update_saddr(), moved to headers) while holding the spinlock.
>>>
>>> To make this atomic against the lookup operation, also acquire the
>>> spinlock on the primary chain there.
>>
>> I'm sorry for the late feedback.
>>
>> I'm concerned by the unconditional spinlock in __udp4_lib_lookup(). I
>> fear it could cause performance regressions in different workloads:
>> heavy UDP unicast flow, or even TCP over UDP tunnel when the NIC
>> supports RX offload for the relevant UDP tunnel protocol.
>>
>> In the first case there will be an additional atomic operation per packet.
>
> So, I've been looking into this a bit, and request-response rates with
> neper's udp_rr (https://github.com/google/neper/blob/master/udp_rr.c)
> for a client/server pair via loopback interface are the same before and
> after this patch.
>
> The reason is, I suppose, that the only contention on that spinlock is
> the "intended" one, that is, between connect() and lookup.
>
> Then I moved on to bulk flows, with socat or iperf3. But there (and
> that's the whole point of this fix) we have connected sockets, and once
> they are connected, we switch to early demux, which is not affected by
> this patch.
>
> In the end, I don't think this will affect "regular", bulk unicast
> flows, because applications using them will typically connect sockets,
> and we'll switch to early demux right away.
>
> This lookup is not exactly "slow path", but it's not fast path either.
Some (most?) QUIC server implementations don't use connect().
DNS servers will be affected, and will see contention on the hash lock.
Even deployments using SO_REUSEPORT with a per-CPU UDP socket will see
contention. This latter case would be pretty bad, as it's supposed to
scale linearly.

I really think taking the hash lock during lookup is a no-go.
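To illustrate that last case (a rough user-space sketch of my own, not
part of the patch; names and error handling are simplified): each
per-CPU worker binds its own unconnected SO_REUSEPORT socket to the same
port and never calls connect(), so every incoming datagram goes through
the full socket lookup.

/* Sketch of a per-CPU SO_REUSEPORT UDP receiver: one socket per worker,
 * all bound to the same port, none of them connected, so each datagram
 * is delivered via the regular (non-early-demux) lookup path.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static int make_worker_socket(uint16_t port)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_addr.s_addr = htonl(INADDR_ANY),
		.sin_port = htons(port),
	};
	int one = 1;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;

	/* One socket per worker/CPU, all sharing the same local port */
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}

	return fd;	/* note: no connect(), the socket stays unconnected */
}

Since the primary hash is keyed on the local port, all those sockets and
all the lookups for that port land on the same bucket, so taking its lock
per packet would serialize the CPUs that are supposed to scale linearly.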
>> In the latter the spin_lock will be contended with multiple concurrent
>> TCP over UDP tunnel flows: the NIC with UDP tunnel offload can use the
>> inner header to compute the RX hash, and use different rx queues for
>> such flows.
>>
>> The GRO stage will perform UDP tunnel socket lookup and will contend the
>> bucket lock.
>
> In this case (I couldn't find out yet), aren't sockets connected? I
> would expect that we switch to the early demux path relatively soon for
> anything that needs to have somehow high throughput.
The UDP socket backing the tunnel is unconnected and can receive data from
multiple other tunnel endpoints.
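For context, this is roughly the pattern a tunnel driver such as vxlan
follows when creating its socket via the udp_tunnel helpers (a loose
paraphrase, not actual driver code; tunnel_create_sock() and its
arguments are my own): only a local port is configured, no peer address,
so the socket stays unconnected.

/* Loose sketch of a vxlan-style tunnel socket setup: bind to a local
 * UDP port, install an encap receive handler, never connect(). All
 * traffic from any remote endpoint goes through the regular lookup.
 */
#include <linux/err.h>
#include <net/udp_tunnel.h>

static struct socket *tunnel_create_sock(struct net *net, __be16 port,
					 int (*recv)(struct sock *sk,
						     struct sk_buff *skb),
					 void *priv)
{
	struct udp_port_cfg udp_conf = {
		.family		= AF_INET,
		.local_udp_port	= port,		/* bind only, no peer */
	};
	struct udp_tunnel_sock_cfg tunnel_cfg = {
		.sk_user_data	= priv,
		.encap_type	= 1,
		.encap_rcv	= recv,
	};
	struct socket *sock;
	int err;

	err = udp_sock_create(net, &udp_conf, &sock);
	if (err < 0)
		return ERR_PTR(err);

	setup_udp_tunnel_sock(net, sock, &tunnel_cfg);

	return sock;
}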
> And if we don't, probably the more reasonable alternative would be to
> "fix" that, rather than keeping this relatively common case broken.
>
> Do you have a benchmark or something I can run?
I'm sorry, but I don't have anything handy. If you have a NIC
implementing e.g. vxlan H/W offload, you should be able to observe
contention with multiple simultaneous TCP over vxlan flows targeting an
endpoint on top of it.
>>> This results in some awkwardness at a caller site, specifically
>>> sock_bindtoindex_locked(), where we really just need to rehash the
>>> socket without changing its address. With the new operation, we now
>>> need to forcibly set the current address again.
>>>
>>> On the other hand, this appears more elegant than alternatives such
>>> as fetching the spinlock reference in ip4_datagram_connect() and
>>> ip6_datagram_connect(), and keeping the rehash operation around for
>>> a single user also seems a tad overkill.
>>
>> Would such option require the same additional lock at lookup time?
>
> Yes, it's conceptually the same, we would pretty much just move code
> around.
>
> I've been thinking about possible alternatives but they all involve a
> much bigger rework. One idea could be that we RCU-connect() sockets,
> instead of just having the hash table insertion under RCU. That is, as
> long as we're in the grace period, the lookup would still see the old
> receive address.
I'm wondering if the issue could be solved (almost) entirely in the
rehash callback? If the rehash happens on connect and the socket does
not have a hash4 (4-tuple) entry yet (i.e. it's not a reconnect), do the
4-tuple hashing before everything else.

Incoming packets should then match on hash4 and reach the socket even
while the other hash(es) are still being updated.
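Something along these lines in the rehash path (a very rough sketch of
the ordering idea only; the example_* helpers are hypothetical
placeholders, not the actual udp_lib_rehash() code):

/* Ordering idea: publish the 4-tuple (hash4) entry for the newly
 * connected socket first, so a concurrent lookup for the new 4-tuple
 * already matches while the port/address based entries are still
 * being moved.
 */
static void udp_rehash_on_connect(struct sock *sk)
{
	if (!example_sk_hashed4(sk)) {
		/* Not a reconnect: insert into the 4-tuple table first. */
		example_hash4_insert(sk);
		/* Make the hash4 entry visible before touching the
		 * other tables; pairs with the lookup side.
		 */
		smp_wmb();
	}

	/* Then move the socket between the other hash buckets as today;
	 * a lookup racing with this still finds it via the hash4 entry.
	 */
	example_rehash_other_tables(sk);
}

This of course only helps if the lookup tries the 4-tuple table before
falling back to the other tables, so that the rehash window stays
invisible to receivers.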
Cheers,
Paolo