Message-ID: <d18e1a78-3b3a-8f23-6db1-20c16795d3ef@linux.ibm.com>
Date: Tue, 26 Sep 2023 09:18:48 +0200
From: Alexandra Winter <wintera@...ux.ibm.com>
To: "D. Wythe" <alibuda@...ux.alibaba.com>,
Wenjia Zhang <wenjia@...ux.ibm.com>, kgraul@...ux.ibm.com,
jaka@...ux.ibm.com
Cc: kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net] net/smc: fix panic smc_tcp_syn_recv_sock() while
closing listen socket
On 26.09.23 05:00, D. Wythe wrote:
> You are right. The key point is how to ensure the validity of the smc sock during the lifetime of the clc sock; if we can
> guarantee that, READ_ONCE is good enough. Unfortunately, I found that there is no such guarantee, so it's still a lifetime problem.
Did you discover a scenario where the clc sock could live longer than the smc sock?
Wouldn't that be a dangerous scenario in itself? I still have some hope that the lifetime of an smc socket is by design longer
than that of the corresponding tcp socket.
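For context, the access under discussion is roughly the following pattern in smc_tcp_syn_recv_sock() (a sketch of the pattern only, not the exact kernel code; the drop label is illustrative):

        /* The listen clcsock carries a back-pointer to the smc listen
         * sock in sk_user_data. A lockless read like this is only safe
         * if the smc sock is guaranteed to outlive the clcsock: */
        struct smc_sock *smc = READ_ONCE(sk->sk_user_data);

        if (!smc)
                goto drop;      /* listen socket already being torn down */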
> Considering the cost, maybe
> we need to:
>
> 1. hold a refcnt on the smc_sock in syn_recv_sock() to keep the smc sock valid during the lifetime of the clc sock
> 2. put that refcnt in the tcp_sock's sk_destruct to release the smc sock.
>
> That way, we can always make sure the smc sock is valid during the lifetime of the clc sock, and we can use READ_ONCE rather
> than a lock. What do you think?
I am not sure I fully understand the details of what you propose to do. And it is not only syn_recv_sock(), right?
You need to consider all relations between smc socks and tcp socks: fallback to TCP, initial creation, children of listen sockets, variants of shutdown, and so on. Preferably a single, simple mechanism covers all situations. Maybe such a mechanism exists already today?
(I don't think clcsock->sk->sk_user_data or sk_callback_lock provide this general coverage.)
If we really have a gap, general refcnt'ing on the smc sock could be a solution, but it needs to be designed carefully.
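For illustration, the refcnt pairing proposed above might look roughly like this (a sketch only, assuming <net/sock.h> and struct smc_sock from net/smc/smc.h; the hook points are assumptions, and real code would have to chain the clcsock's previous sk_destruct, e.g. inet_sock_destruct):

        /* Runs when the tcp (clc) sock is finally destroyed: drop the
         * reference that keeps the smc sock alive. */
        static void smc_clcsock_destruct(struct sock *clcsk)
        {
                struct smc_sock *smc = clcsk->sk_user_data;

                if (smc)
                        sock_put(&smc->sk);     /* balances sock_hold() below */
        }

        /* Called when the clcsock is attached to the smc sock: pin the
         * smc sock for as long as the clcsock lives. */
        static void smc_clcsock_hold(struct smc_sock *smc, struct sock *clcsk)
        {
                sock_hold(&smc->sk);
                clcsk->sk_user_data = smc;      /* read in syn_recv_sock() */
                clcsk->sk_destruct = smc_clcsock_destruct;
        }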
Many thanks to you and the team for helping to make smc more stable and robust.