Message-ID: <60622d735adf3_401fb208d0@john-XPS-13-9370.notmuch>
Date: Mon, 29 Mar 2021 12:41:39 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Cong Wang <xiyou.wangcong@...il.com>, netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, duanxiongchun@...edance.com,
wangdongdong.6@...edance.com, jiang.wang@...edance.com,
Cong Wang <cong.wang@...edance.com>,
John Fastabend <john.fastabend@...il.com>,
Daniel Borkmann <daniel@...earbox.net>,
Jakub Sitnicki <jakub@...udflare.com>,
Lorenz Bauer <lmb@...udflare.com>
Subject: RE: [Patch bpf-next v7 04/13] skmsg: avoid lock_sock() in
sk_psock_backlog()
Cong Wang wrote:
> From: Cong Wang <cong.wang@...edance.com>
>
> We do not have to lock the sock to avoid losing sk_socket;
> instead we can purge all the ingress queues when we close
> the socket. Sending or receiving packets after orphaning the
> socket makes no sense.
>
> We do purge these queues when the psock refcnt reaches zero,
> but here we want to purge them explicitly in sock_map_close().
> There are also some nasty races between testing the
> SK_PSOCK_TX_ENABLED bit and queuing/canceling the psock work;
> we can expand psock->ingress_lock a bit to protect them too.
>
> As noticed by John, we still have to lock psock->work,
> because the same work item could be running concurrently on
> different CPUs.
>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> Cc: Jakub Sitnicki <jakub@...udflare.com>
> Cc: Lorenz Bauer <lmb@...udflare.com>
> Signed-off-by: Cong Wang <cong.wang@...edance.com>
> ---
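For anyone following along, the locking pattern described above is roughly
the one sketched below. This is a minimal illustration only, written against
generic kernel primitives; the struct, field, and function names (psock_like,
work_mutex, psock_stop_and_purge(), etc.) are stand-ins and not necessarily
the identifiers used in the patch:

	#include <linux/bitops.h>
	#include <linux/mutex.h>
	#include <linux/skbuff.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	/* Stand-in for SK_PSOCK_TX_ENABLED. */
	enum { PSOCK_TX_ENABLED_BIT = 0 };

	/* Simplified stand-in for struct sk_psock. */
	struct psock_like {
		spinlock_t		ingress_lock;	/* protects state bit + ingress queue */
		unsigned long		state;
		struct sk_buff_head	ingress_skb;
		struct work_struct	work;		/* backlog worker */
		struct mutex		work_mutex;	/* serializes the worker body */
	};

	/* Close path: no lock_sock(); stop new work and drop queued data. */
	static void psock_stop_and_purge(struct psock_like *p)
	{
		spin_lock_bh(&p->ingress_lock);
		clear_bit(PSOCK_TX_ENABLED_BIT, &p->state);
		__skb_queue_purge(&p->ingress_skb);
		spin_unlock_bh(&p->ingress_lock);

		cancel_work_sync(&p->work);
	}

	/* Producers test the enable bit under the same lock, so queuing
	 * the work cannot race with the purge above.
	 */
	static void psock_schedule_backlog(struct psock_like *p)
	{
		spin_lock_bh(&p->ingress_lock);
		if (test_bit(PSOCK_TX_ENABLED_BIT, &p->state))
			schedule_work(&p->work);
		spin_unlock_bh(&p->ingress_lock);
	}

	/* The worker body still takes a mutex, since (as noted above) the
	 * same work item could be running concurrently on different CPUs.
	 */
	static void psock_backlog(struct work_struct *work)
	{
		struct psock_like *p = container_of(work, struct psock_like, work);

		mutex_lock(&p->work_mutex);
		/* ... drain p->ingress_skb toward the socket ... */
		mutex_unlock(&p->work_mutex);
	}
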
Acked-by: John Fastabend <john.fastabend@...il.com>