Message-ID: <87a6574yz0.fsf@cloudflare.com>
Date: Thu, 03 Nov 2022 20:22:04 +0100
From: Jakub Sitnicki <jakub@...udflare.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>,
Cong Wang <cong.wang@...edance.com>, sdf@...gle.com,
netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [Patch bpf] sock_map: convert cancel_work_sync() to cancel_work()
On Tue, Nov 01, 2022 at 01:01 PM -07, John Fastabend wrote:
> Jakub Sitnicki wrote:
>> On Fri, Oct 28, 2022 at 12:16 PM -07, Cong Wang wrote:
>> > On Mon, Oct 24, 2022 at 03:33:13PM +0200, Jakub Sitnicki wrote:
>> >> On Tue, Oct 18, 2022 at 11:13 AM -07, sdf@...gle.com wrote:
>> >> > On 10/17, Cong Wang wrote:
>> >> >> From: Cong Wang <cong.wang@...edance.com>
>> >> >
>> >> >> Technically we don't need to lock the sock in the psock work, but we
>> >> >> need to prevent this work from running in parallel with sock_map_close().
>> >> >
>> >> >> With this, we no longer need to wait for the psock->work synchronously,
>> >> >> because when we reach here, either this work is still pending, or
>> >> >> blocking on the lock_sock(), or it is completed. We only need to cancel
>> >> >> the first case asynchronously, and to bail out of the second case
>> >> >> quickly by checking the SK_PSOCK_TX_ENABLED bit.
>> >> >
>> >> >> Fixes: 799aa7f98d53 ("skmsg: Avoid lock_sock() in sk_psock_backlog()")
>> >> >> Reported-by: Stanislav Fomichev <sdf@...gle.com>
>> >> >> Cc: John Fastabend <john.fastabend@...il.com>
>> >> >> Cc: Jakub Sitnicki <jakub@...udflare.com>
>> >> >> Signed-off-by: Cong Wang <cong.wang@...edance.com>
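
For anyone skimming the thread, the pattern described in the commit
message is: the worker takes the socket lock and bails out early once the
TX enabled bit has been cleared, so close() only needs a non-blocking
cancel. A toy userspace model of that pattern, with made-up names, a
pthread mutex standing in for lock_sock(), and a plain flag standing in
for SK_PSOCK_TX_ENABLED, looks roughly like this (not the actual patch):

/* Toy model only. All names are made up; a pthread mutex stands in for
 * lock_sock() and a plain flag stands in for SK_PSOCK_TX_ENABLED.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;
static bool tx_enabled = true;

static void *backlog_work(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sock_lock);		/* "lock_sock()" */
	if (!tx_enabled) {			/* close() already ran: bail out */
		pthread_mutex_unlock(&sock_lock);
		return NULL;
	}
	puts("worker: draining backlog under the lock");
	pthread_mutex_unlock(&sock_lock);
	return NULL;
}

static void toy_close(void)
{
	pthread_mutex_lock(&sock_lock);
	tx_enabled = false;	/* a worker still waiting on the lock will bail */
	pthread_mutex_unlock(&sock_lock);
	/* No synchronous wait here: the work is either done, about to bail
	 * on the flag check, or still pending and only needs an async cancel.
	 */
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, backlog_work, NULL);
	toy_close();
	pthread_join(&worker, NULL);	/* join only so the demo exits cleanly */
	return 0;
}

The point being that close() never has to block on the worker; any worker
that loses the race gives up at the flag check.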
>> >> >
>> >> > This seems to remove the splat for me:
>> >> >
>> >> > Tested-by: Stanislav Fomichev <sdf@...gle.com>
>> >> >
>> >> > The patch looks good, but I'll leave the review to Jakub/John.
>> >>
>> >> I can't poke any holes in it either.
>> >>
>> >> However, it is harder for me to follow than the initial idea [1].
>> >> So I'm wondering if there was anything wrong with it?
>> >
>> > It caused a warning in sk_stream_kill_queues() when I actually tested
>> > it (after posting).
>>
>> We must have seen the same warnings. They seemed unrelated, so I went
>> digging. We have a fix for these [1]. They were present since 5.18-rc1.
>>
>> >> This seems like a step back when it comes to simplifying locking in
>> >> sk_psock_backlog() that was done in 799aa7f98d53.
>> >
>> > Kinda, but it is still true that this sock lock is not for sk_socket
>> > (merely for closing this race condition).
>>
>> I really think the initial idea [2] is much nicer. I can turn it into a
>> patch, if you are short on time.
>>
>> With [1] and [2] applied, the deadlock and memory accounting warnings
>> are gone when running `test_sockmap`.
>>
>> Thanks,
>> Jakub
>>
>> [1] https://lore.kernel.org/netdev/1667000674-13237-1-git-send-email-wangyufen@huawei.com/
>> [2] https://lore.kernel.org/netdev/Y0xJUc%2FLRu8K%2FAf8@pop-os.localdomain/
>
> Cong, what do you think? I tend to agree [2] looks nicer to me.
>
> @Jakub,
>
> Also I think we could simply drop the proposed cancel_work_sync in
> sock_map_close()?
>
> }
> @@ -1619,9 +1619,10 @@ void sock_map_close(struct sock *sk, long timeout)
> saved_close = psock->saved_close;
> sock_map_remove_links(sk, psock);
> rcu_read_unlock();
> - sk_psock_stop(psock, true);
> - sk_psock_put(sk, psock);
> + sk_psock_stop(psock);
> release_sock(sk);
> + cancel_work_sync(&psock->work);
> + sk_psock_put(sk, psock);
> saved_close(sk, timeout);
> }
>
> The sk_psock_put is going to cancel the work before destroying the psock,
>
> sk_psock_put()
> sk_psock_drop()
> queue_rcu_work(system_wq, psock->rwork)
>
> and then in callback we
>
> sk_psock_destroy()
>     cancel_work_sync(psock->work)
>
> although it might be nice to have the work cancelled earlier rather than
> later, maybe.
Good point.

I kinda like the property that once close() returns we know there is no
deferred work running for the socket.

I find APIs where cleanup is deferred sometimes harder to write tests
for.
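
To make that concrete, here is a toy userspace model of the difference,
with made-up names and pthread_join() standing in for cancel_work_sync().
It only illustrates the ordering, not the actual sock_map code:

/* Toy model of the ordering above. With a synchronous cancel in close(),
 * the worker is guaranteed to be finished by the time close() returns;
 * with the deferred variant it may still be running afterwards.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool work_running;
static pthread_t worker;

static void *psock_work(void *arg)
{
	(void)arg;
	atomic_store(&work_running, true);
	usleep(10000);				/* pretend to drain the backlog */
	atomic_store(&work_running, false);
	return NULL;
}

/* Deferred variant: the final cancel happens later, e.g. from an RCU
 * callback, so close() itself does not wait for the worker. */
static void close_with_deferred_cancel(void)
{
}

/* Variant from the diff above: cancel synchronously before returning. */
static void close_with_sync_cancel(void)
{
	pthread_join(worker, NULL);		/* "cancel_work_sync()" */
}

int main(void)
{
	pthread_create(&worker, NULL, psock_work, NULL);
	usleep(1000);				/* let the worker start */

	close_with_deferred_cancel();
	printf("after deferred-style close, work may still be running: %d\n",
	       atomic_load(&work_running));	/* can print 1 */

	close_with_sync_cancel();
	printf("after a synchronous cancel, work still running: %d\n",
	       atomic_load(&work_running));	/* always 0 */
	return 0;
}

With the synchronous cancel in close(), the second check can never observe
the worker still running; that is the guarantee I would like to keep.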
But I don't really have a strong opinion here.
-Jakub