Message-ID: <20241206115042.4e98ff8b@elisabeth>
Date: Fri, 6 Dec 2024 11:50:42 +0100
From: Stefano Brivio <sbrivio@...hat.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>, Eric Dumazet
 <edumazet@...gle.com>, netdev@...r.kernel.org, Kuniyuki Iwashima
 <kuniyu@...zon.com>, Mike Manning <mvrmanning@...il.com>, David Gibson
 <david@...son.dropbear.id.au>, Paul Holzinger <pholzing@...hat.com>, Philo
 Lu <lulie@...ux.alibaba.com>, Cambda Zhu <cambda@...ux.alibaba.com>, Fred
 Chen <fred.cc@...baba-inc.com>, Yubing Qiu
 <yubing.qiuyubing@...baba-inc.com>
Subject: Re: [PATCH net-next 2/2] datagram, udp: Set local address and
 rehash socket atomically against lookup

On Thu, 5 Dec 2024 17:53:33 +0100
Paolo Abeni <pabeni@...hat.com> wrote:

> On 12/5/24 16:58, Stefano Brivio wrote:
> > On Thu, 5 Dec 2024 10:30:14 +0100
> > Paolo Abeni <pabeni@...hat.com> wrote:
> >   
> >> On 12/4/24 23:12, Stefano Brivio wrote:
> >>  
> >>> [...]
> >>>
> >>> To fix this, replace the rehash operation by a set_rcv_saddr()
> >>> callback holding the spinlock on the primary hash chain, just like
> >>> the rehash operation used to do, but also setting the address (via
> >>> inet_update_saddr(), moved to headers) while holding the spinlock.
> >>>
> >>> To make this atomic against the lookup operation, also acquire the
> >>> spinlock on the primary chain there.    
> >>
> >> I'm sorry for the late feedback.
> >>
> >> I'm concerned by the unconditional spinlock in __udp4_lib_lookup(). I
> >> fear it could cause performance regressions in different workloads:
> >> heavy UDP unicast flow, or even TCP over UDP tunnel when the NIC
> >> supports RX offload for the relevant UDP tunnel protocol.
> >>
> >> In the first case there will be an additional atomic operation per packet.  
> > 
> > So, I've been looking into this a bit, and request-response rates with
> > neper's udp_rr (https://github.com/google/neper/blob/master/udp_rr.c)
> > for a client/server pair via loopback interface are the same before and
> > after this patch.
> > 
> > The reason is, I suppose, that the only contention on that spinlock is
> > the "intended" one, that is, between connect() and lookup.
> > 
> > Then I moved on to bulk flows, with socat or iperf3. But there (and
> > that's the whole point of this fix) we have connected sockets, and once
> > they are connected, we switch to early demux, which is not affected by
> > this patch.
> > 
> > In the end, I don't think this will affect "regular", bulk unicast
> > flows, because applications using them will typically connect sockets,
> > and we'll switch to early demux right away.
> > 
> > This lookup is not exactly "slow path", but it's not fast path either.  
> 
> Some (most ?) quick server implementations don't use connect.

Assuming you mean QUIC, fair enough, I see your point.
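
For reference, the pattern in question is a single unconnected UDP
socket serving many peers, something like this (minimal userspace
sketch, port 8443 arbitrary, error handling mostly omitted); every
inbound datagram takes the full __udp4_lib_lookup() path, because the
socket never connect()s:

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8443),		/* arbitrary */
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	struct sockaddr_in peer;
	socklen_t plen;
	char buf[2048];
	ssize_t n;

	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	for (;;) {
		plen = sizeof(peer);
		n = recvfrom(s, buf, sizeof(buf), 0,
			     (struct sockaddr *)&peer, &plen);
		if (n < 0)
			break;
		/* handle, then reply to whichever peer sent this */
		sendto(s, buf, n, 0, (struct sockaddr *)&peer, plen);
	}
	close(s);
	return 0;
}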

> DNS servers will be affected, and will see contention on the hash lock

At the same time, clients (not just DNS ones) are surely affected by
bogus ICMP Port Unreachable messages if the peer is remote, or by
ECONNREFUSED on send() (!) if it's local.
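
For readers who haven't hit it: ECONNREFUSED on a UDP send() comes from
a queued ICMP Port Unreachable error being reported on the *next*
operation on a connected socket. The mechanism (not the race itself)
can be seen with a trivial sketch, assuming port 54321 is closed on
loopback:

#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(54321),	/* assumed closed */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};

	connect(s, (struct sockaddr *)&dst, sizeof(dst));
	send(s, "x", 1, 0);		/* elicits ICMP Port Unreachable */
	usleep(100 * 1000);		/* let the ICMP error come back */

	if (send(s, "x", 1, 0) < 0)
		perror("send");		/* ECONNREFUSED expected here */

	close(s);
	return 0;
}

The bug at hand makes exactly this kind of error show up on sockets
whose peer *is* listening, just because a lookup raced with the rehash.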

If the (presumed) contention is so relevant, I would have expected that
somebody could point to a benchmark for it. As I mentioned, udp_rr from
'neper' didn't really show any difference for me. Anyway, fine, let's
assume that it's an issue.

> Even deployments using SO_REUSEPORT with per-CPU UDP sockets will see
> contention. This latter case would be pretty bad, as it's supposed to
> scale linearly.

Okay, I guess we could observe a bigger impact in this case (this is
something I didn't try).
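
For concreteness, I understand the deployment you mean as something
like this (sketch, four sockets standing in for one per CPU, port 5353
arbitrary; SO_INCOMING_CPU is settable since Linux 4.4). All these
sockets hash to the same chain, so a spinlock taken in the lookup would
serialize exactly the path that SO_REUSEPORT is meant to scale:

#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#define NSOCK 4	/* stand-in for the CPU count */

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(5353),	/* arbitrary */
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int i, one = 1, s[NSOCK];

	for (i = 0; i < NSOCK; i++) {
		s[i] = socket(AF_INET, SOCK_DGRAM, 0);
		setsockopt(s[i], SOL_SOCKET, SO_REUSEPORT,
			   &one, sizeof(one));
		/* optionally steer each socket to "its" CPU */
		setsockopt(s[i], SOL_SOCKET, SO_INCOMING_CPU,
			   &i, sizeof(i));
		if (bind(s[i], (struct sockaddr *)&addr,
			 sizeof(addr)) < 0) {
			perror("bind");
			exit(1);
		}
	}
	pause();	/* each socket would get its own thread */
	return 0;
}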

> I really think the hash lock during lookup is a no go.
> 
> >> In the latter the spin_lock will be contended with multiple concurrent
> >> TCP over UDP tunnel flows: the NIC with UDP tunnel offload can use the
> >> inner header to compute the RX hash, and use different rx queues for
> >> such flows.
> >>
> >> The GRO stage will perform UDP tunnel socket lookup and will contend the
> >> bucket lock.  
> > 
> > In this case (I couldn't find out yet), aren't sockets connected? I
> > would expect that we switch to the early demux path relatively soon for
> > anything that needs to have somehow high throughput.  
> 
> The UDP socket backing tunnels is unconnected and can receive data from
> multiple other tunnel endpoints.
> 
> > And if we don't, probably the more reasonable alternative would be to
> > "fix" that, rather than keeping this relatively common case broken.
> > 
> > Do you have a benchmark or something I can run?  
> 
> I'm sorry, but I don't have anything handy. If you have a NIC
> implementing e.g. vxlan H/W offload, you should be able to observe
> contention with multiple simultaneous TCP over vxlan flows targeting
> an endpoint on top of it.

Thanks for the idea, but no, I don't have one right now.

> >>> This results in some awkwardness at a caller site, specifically
> >>> sock_bindtoindex_locked(), where we really just need to rehash the
> >>> socket without changing its address. With the new operation, we now
> >>> need to forcibly set the current address again.
> >>>
> >>> On the other hand, this appears more elegant than alternatives such
> >>> as fetching the spinlock reference in ip4_datagram_connect() and
> >>> ip6_datagram_connect(), and keeping the rehash operation around for
> >>> a single user also seems a tad overkill.    
> >>
> >> Would such option require the same additional lock at lookup time?  
> > 
> > Yes, it's conceptually the same, we would pretty much just move code
> > around.
> > 
> > I've been thinking about possible alternatives but they all involve a
> > much bigger rework. One idea could be that we RCU-connect() sockets,
> > instead of just having the hash table insertion under RCU. That is, as
> > long as we're in the grace period, the lookup would still see the old
> > receive address.  
> 
> I'm wondering if the issue could be solved (almost) entirely in the
> rehash callback?!? If the rehash happens on connect and the socket
> does not have hash4 yet (it's not a reconnect), do the l4 hashing
> everything else.

So, yes, that's actually the first thing I tried: do the hashing (any
hash) before setting the address (I guess that's what you mean by
"everything else").

If you take this series, and drop the changes in __udp4_lib_lookup(), I
guess that would match what you suggest.

With udp_lib_set_rcv_saddr() instead of a "rehash" callback you can see
pretty easily that hashes are updated first, and then we set the
receiving address.

It doesn't work because the socket does have a receiving address (and
hashes) already: it's 0.0.0.0. So we're just moving the race condition.
I don't think we can really change that part.
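
To illustrate "moving the race", here's a toy userspace model,
emphatically not the kernel code: two flags stand in for the hash
chains, one value for the receiving address. Whichever of the two
stores in the "connect()" sequence goes first, there's a window where
a lookup matches the socket in neither chain:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define WILDCARD 0	/* stands for 0.0.0.0 */
#define NEWADDR  1	/* stands for the address set by connect() */

static _Atomic int bucket[2] = { 1, 0 };	/* chain membership */
static _Atomic int rcv_saddr = WILDCARD;	/* socket's address */
static _Atomic int done;

/* packet for 'daddr': exact chain first, then wildcard fallback */
static int lookup(int daddr)
{
	if (atomic_load(&bucket[daddr]) &&
	    atomic_load(&rcv_saddr) == daddr)
		return 1;
	return atomic_load(&bucket[WILDCARD]) &&
	       atomic_load(&rcv_saddr) == WILDCARD;
}

static void *reader(void *arg)
{
	long misses = 0;

	(void)arg;
	while (!atomic_load(&done))
		if (!lookup(NEWADDR))	/* should always succeed */
			misses++;
	printf("bogus lookup misses: %ld\n", misses);
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	pthread_create(&t, NULL, reader, NULL);

	for (i = 0; i < 1000000; i++) {
		/* "connect()": rehash first, then set the address. In
		 * between, the socket is in the new chain but still
		 * has the wildcard address: no lookup matches it.
		 * Swap the stores and the hole moves, it doesn't go. */
		atomic_store(&bucket[WILDCARD], 0);
		atomic_store(&bucket[NEWADDR], 1);
		atomic_store(&rcv_saddr, NEWADDR);

		/* back to the initial state (reverse order) */
		atomic_store(&rcv_saddr, WILDCARD);
		atomic_store(&bucket[NEWADDR], 0);
		atomic_store(&bucket[WILDCARD], 1);
	}
	atomic_store(&done, 1);
	pthread_join(t, NULL);
	return 0;
}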

Note that this issue occurs with and without four-tuple hashes (I
actually posted the original fix before they were introduced).

> Incoming packets should match the l4 hash and reach the socket even
> while later updating the other hash(es).

...to obtain this kind of outcome, I'm trying to keep the old hash
around until the new hash is there *and* we changed the address.

For simplicity, I cut out four-tuple hashes, and, in the new
udp_lib_set_rcv_saddr(), I changed the RCU calls so that the old hash
should always stay visible until both are done... but that doesn't help
either, for some reason.
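
In the toy model above, the ordering I'm after would be this drop-in
replacement for the three stores in the "connect()" sequence; the old
chain keeps the socket visible until the new chain *and* the address
are in place (with the caveat that the toy obviously lacks whatever
defeats this in the real code, four-tuple hashes and the actual RCU
list handling included):

	atomic_store(&bucket[NEWADDR], 1);   /* 1. link into new chain */
	atomic_store(&rcv_saddr, NEWADDR);   /* 2. update the address  */
	atomic_store(&bucket[WILDCARD], 0);  /* 3. unlink from the old
					      *    chain; in the kernel,
					      *    only after a grace
					      *    period */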

I wonder if you have any idea as to whether that's a viable approach
at all, and whether there's anything in particular I should watch out
for while implementing it.

-- 
Stefano

