Message-ID: <1299193d-e9dd-4560-7c95-39692df6e5a3@novek.ru>
Date: Fri, 29 Jan 2021 17:36:20 +0000
From: Vadim Fedorenko <vfedorenko@...ek.ru>
To: David Howells <dhowells@...hat.com>
Cc: syzbot+df400f2f24a1677cd7e0@...kaller.appspotmail.com,
netdev@...r.kernel.org, linux-afs@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] rxrpc: Fix deadlock around release of dst cached on
udp tunnel
On 29.01.2021 17:30, Vadim Fedorenko wrote:
> On 29.01.2021 16:44, David Howells wrote:
>> AF_RXRPC sockets use UDP ports in encap mode. This causes the socket and
>> dst from an incoming packet to get stolen and attached to the UDP socket,
>> whence they are leaked when that socket is closed.
>>
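(For context: the "stealing" David describes happens in the UDP receive
path, where the dst attached to an incoming skb is transplanted onto the
socket's sk_rx_dst and only dropped again when the socket is destroyed.
Below is a trimmed sketch of the 5.11-era net/ipv4/udp.c logic; the
function name matches mainline, but this is illustrative rather than a
verbatim copy:

	/* Cache the incoming packet's dst on the UDP socket, releasing
	 * whatever was cached there before. */
	bool udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
	{
		struct dst_entry *old;

		if (dst_hold_safe(dst)) {
			old = xchg(&sk->sk_rx_dst, dst);
			dst_release(old);
			return old != dst;
		}
		return false;
	}

If nothing closes the socket while the netns is being torn down, that
cached dst keeps holding its device reference.)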
>> When a network namespace is removed, the wait for dst records to be cleaned
>> up happens before the cleanup of the rxrpc and UDP socket, meaning that the
>> wait never finishes.
>>
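(The ordering comes from how pernet operations are registered. Roughly,
per the 5.11-era net/core/net_namespace.c, with locking and error
handling elided:

	/* Subsys ops are inserted ahead of the device ops on pernet_list;
	 * cleanup_net() walks the list in reverse, so device ->exit
	 * handlers run before subsys ->exit handlers, and later-registered
	 * device ops exit before early ones such as the loopback and
	 * default-device ops. */
	static struct list_head *first_device = &pernet_list;

	int register_pernet_subsys(struct pernet_operations *ops)
	{
		/* Inserted just before the first device entry. */
		return register_pernet_operations(first_device, ops);
	}

	int register_pernet_device(struct pernet_operations *ops)
	{
		/* Appended at the tail, after every subsys entry. */
		int error = register_pernet_operations(&pernet_list, ops);

		if (!error && (first_device == &pernet_list))
			first_device = &ops->list;
		return error;
	}

So an exit hook registered as a subsys can only run after the device
exits that wait for the netdev refs, including the refs pinned by a
leaked dst.)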
>> Fix this by moving the rxrpc (and, by dependence, the afs) private
>> per-network namespace registrations to the device group rather than subsys
>> group. This allows cached rxrpc local endpoints to be cleared and their
>> UDP sockets closed before we try waiting for the dst records.
>>
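(Concretely, the core of the fix is just switching the registration
helpers; shown schematically here rather than as the exact hunks:

	/* net/rxrpc/af_rxrpc.c, af_rxrpc_init(): */
	ret = register_pernet_device(&rxrpc_net_ops);	/* was register_pernet_subsys() */

with the matching unregister_pernet_device() call in af_rxrpc_exit(),
and the same substitution for the afs_net_ops registration in
fs/afs/main.c.)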
>> The symptom is that lines looking like the following:
>>
>> unregister_netdevice: waiting for lo to become free
>>
>> get emitted at regular intervals after running something like the
>> referenced syzbot test.
>>
>> Thanks to Vadim for tracking this down and working out the fix.
>
> You missed the call to dst_release(sk->sk_rx_dst) in rxrpc_sock_destructor.
> Without it we are still leaking the dst.
I mean this part:
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 0a2f481..3c0635e 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -833,6 +833,7 @@ static void rxrpc_sock_destructor(struct sock *sk)
 	_enter("%p", sk);
 
 	rxrpc_purge_queue(&sk->sk_receive_queue);
+	dst_release(sk->sk_rx_dst);
 
 	WARN_ON(refcount_read(&sk->sk_wmem_alloc));
 	WARN_ON(!sk_unhashed(sk));
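
For the record, dst_release() rather than dst_destroy() is the right
helper here: it tolerates a NULL sk_rx_dst and only frees the entry once
the last reference is gone. A sketch of the net/core/dst.c behaviour,
not a verbatim copy:

	void dst_release(struct dst_entry *dst)
	{
		if (dst) {
			/* Drop our reference; free via RCU only when it
			 * was the last one. */
			if (!atomic_dec_return(&dst->__refcnt))
				call_rcu(&dst->rcu_head, dst_destroy_rcu);
		}
	}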