Message-ID: <87ilk9ftls.fsf@cloudflare.com>
Date: Mon, 24 Oct 2022 11:36:47 +0200
From: Jakub Sitnicki <jakub@...udflare.com>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: sdf@...gle.com, john.fastabend@...il.com, netdev@...r.kernel.org,
bpf@...r.kernel.org, Cong Wang <cong.wang@...edance.com>,
Martin KaFai Lau <martin.lau@...ux.dev>
Subject: Re: Lockdep warning after c0feea594e058223973db94c1c32a830c9807c86
On Sun, Oct 16, 2022 at 11:11 AM -07, Cong Wang wrote:
> On Thu, Oct 13, 2022 at 10:39:08PM +0200, Jakub Sitnicki wrote:
>> Hi Stan,
>>
>> On Wed, Oct 12, 2022 at 02:08 PM -07, sdf@...gle.com wrote:
>> > Hi John & Jakub,
>> >
>> > Upstream commit c0feea594e05 ("workqueue: don't skip lockdep work
>> > dependency in cancel_work_sync()") seems to trigger the following
>> > lockdep warning during test_prog's sockmap_listen:
>> >
>> > [ +0.003631] WARNING: possible circular locking dependency detected
>>
>> [...]
>>
>> > Are you aware? Any idea what's wrong?
>> > Is there some stable fix I'm missing in bpf-next?
>>
>> Thanks for bringing it up. I didn't know.
>>
>> The mentioned commit doesn't look that fresh
>>
>> commit c0feea594e058223973db94c1c32a830c9807c86
>> Author: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
>> Date: Fri Jul 29 13:30:23 2022 +0900
>>
>> workqueue: don't skip lockdep work dependency in cancel_work_sync()
>>
>> ... but then it just landed not so long ago, which explains things:
>>
>> $ git describe --contains c0feea594e058223973db94c1c32a830c9807c86 --match 'v*'
>> v6.0-rc7~10^2
>>
>> I've untangled the call chains leading to the potential deadlock a
>> bit. There does seem to be a window of opportunity there.
>>
>> psock->work.func = sk_psock_backlog()
>>   ACQUIRE psock->work_mutex
>>   sk_psock_handle_skb()
>>     skb_send_sock()
>>       __skb_send_sock()
>>         sendpage_unlocked()
>>           kernel_sendpage()
>>             sock->ops->sendpage = inet_sendpage()
>>               sk->sk_prot->sendpage = tcp_sendpage()
>>                 ACQUIRE sk->sk_lock
>>                   tcp_sendpage_locked()
>>                 RELEASE sk->sk_lock
>>   RELEASE psock->work_mutex
>>
>> sock_map_close()
>>   ACQUIRE sk->sk_lock
>>   sk_psock_stop()
>>     sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED)
>>   cancel_work_sync()
>>     __cancel_work_timer()
>>       __flush_work()
>>         // wait for psock->work to finish
>>   RELEASE sk->sk_lock
>>
>> There is no fix I know of. Need to think. Ideas welcome.
>>
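Side note, strictly illustrative: the same inversion can be reproduced
in a few lines of user-space C with pthreads, with all names made up,
worker() standing in for sk_psock_backlog() and main() for
sock_map_close(). The closer grabs the "socket lock" and then waits for
the worker, which still needs that lock:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sk_lock    = PTHREAD_MUTEX_INITIALIZER; /* ~ sk->sk_lock       */
static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ psock->work_mutex */

static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&work_mutex);
	sleep(1);                        /* let the closer take sk_lock first */
	pthread_mutex_lock(&sk_lock);    /* ~ tcp_sendpage(): blocks forever  */
	pthread_mutex_unlock(&sk_lock);
	pthread_mutex_unlock(&work_mutex);
	return NULL;
}

int main(void)
{
	pthread_t w;

	pthread_create(&w, NULL, worker, NULL);
	pthread_mutex_lock(&sk_lock);    /* ~ lock_sock() in sock_map_close() */
	pthread_join(w, NULL);           /* ~ cancel_work_sync(): waits for   */
	                                 /*   the worker -> deadlock          */
	pthread_mutex_unlock(&sk_lock);
	puts("never reached");
	return 0;
}
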
>
> Thanks for the analysis.
>
> I wonder if we can simply move this cancel_work_sync() out of sock
> lock... Something like this:
[...]
> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> index a660baedd9e7..81beb16ab1eb 100644
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -1596,7 +1596,7 @@ void sock_map_destroy(struct sock *sk)
>  	saved_destroy = psock->saved_destroy;
>  	sock_map_remove_links(sk, psock);
>  	rcu_read_unlock();
> -	sk_psock_stop(psock, false);
> +	sk_psock_stop(psock);
>  	sk_psock_put(sk, psock);
>  	saved_destroy(sk);
>  }
> @@ -1619,9 +1619,10 @@ void sock_map_close(struct sock *sk, long timeout)
>  	saved_close = psock->saved_close;
>  	sock_map_remove_links(sk, psock);
>  	rcu_read_unlock();
> -	sk_psock_stop(psock, true);
> -	sk_psock_put(sk, psock);
> +	sk_psock_stop(psock);
>  	release_sock(sk);
> +	cancel_work_sync(&psock->work);
> +	sk_psock_put(sk, psock);
>  	saved_close(sk, timeout);
>  }
>  EXPORT_SYMBOL_GPL(sock_map_close);
Sorry for the delay. I've been out.
Great idea. I don't see why not.
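
Just to convince myself, I replayed your reordering in the toy pthread
sketch from earlier in the thread (again purely illustrative; worker()
and the two mutexes stay as before, only main() changes). Releasing the
"socket lock" before waiting for the worker, which is what your patch
does with release_sock() before cancel_work_sync(), lets the worker
finish, so the wait-while-holding-the-lock cycle is gone:

int main(void)
{
	pthread_t w;

	pthread_create(&w, NULL, worker, NULL);
	pthread_mutex_lock(&sk_lock);    /* ~ lock_sock() in sock_map_close() */
	/* ... psock teardown under the lock ... */
	pthread_mutex_unlock(&sk_lock);  /* ~ release_sock(): drop the lock,  */
	pthread_join(w, NULL);           /* ~ cancel_work_sync(): then wait   */
	puts("worker finished, no deadlock");
	return 0;
}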