Message-ID: <3045529a-11a3-421d-8da3-94788f12f6f4@linux.alibaba.com>
Date: Wed, 27 Mar 2024 14:12:24 +0800
From: Wen Gu <guwen@...ux.alibaba.com>
To: "Antipov, Dmitriy" <Dmitriy.Antipov@...tline.com>,
"gbayer@...ux.ibm.com" <gbayer@...ux.ibm.com>,
"wenjia@...ux.ibm.com" <wenjia@...ux.ibm.com>,
"jaka@...ux.ibm.com" <jaka@...ux.ibm.com>
Cc: "lvc-project@...uxtesting.org" <lvc-project@...uxtesting.org>,
"Shvetsov, Alexander" <Alexander.Shvetsov@...tline.com>,
"linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [lvc-project] [PATCH] [RFC] net: smc: fix fasync leak in
smc_release()
On 2024/3/26 16:18, Antipov, Dmitriy wrote:
> On Thu, 2024-03-07 at 13:21 +0300, Dmitry Antipov wrote:
>
>> On Thu, 2024-03-07 at 10:57 +0100, Jan Karcher wrote:
>>
>>> We think it might be an option to secure the path in this function with
>>> the smc->clcsock_release_lock.
>>>
>>> ```
>>> lock_sock(&smc->sk);
>>> if (smc->use_fallback) {
>>>         if (!smc->clcsock) {
>>>                 release_sock(&smc->sk);
>>>                 return -EBADF;
>>>         }
>>> +       mutex_lock(&smc->clcsock_release_lock);
>>>         answ = smc->clcsock->ops->ioctl(smc->clcsock, cmd, arg);
>>> +       mutex_unlock(&smc->clcsock_release_lock);
>>>         release_sock(&smc->sk);
>>>         return answ;
>>> }
>>> ```
>>>
>>> What do you think about this?
>>
>> You're trying to fix it on the wrong path. FIOASYNC is a generic rather
>> than protocol-specific thing. So userspace 'ioctl(sock, FIOASYNC, [])'
>> call is handled with:
>>
>> -> sys_ioctl()
>> -> do_vfs_ioctl()
>> -> ioctl_fioasync()
>> -> filp->f_op->fasync() (which is sock_fasync() for all sockets)
>>
>> rather than 'sock->ops->ioctl(...)'.
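>>
>> [Editor's note: the dispatch above can be observed from userspace. The
>> sketch below (my illustration, not part of the original patch) shows
>> that FIOASYNC on any socket is handled generically by the VFS, which
>> toggles O_ASYNC in the file flags, without ever reaching the
>> protocol's sock->ops->ioctl handler:]
>>
>> ```c
>> #include <stdio.h>
>> #include <fcntl.h>
>> #include <unistd.h>
>> #include <sys/ioctl.h>
>> #include <sys/socket.h>
>>
>> int main(void)
>> {
>> 	/* Any socket type works here: FIOASYNC is intercepted by
>> 	 * do_vfs_ioctl() -> ioctl_fioasync() -> filp->f_op->fasync()
>> 	 * (sock_fasync() for sockets), independent of the protocol. */
>> 	int fd = socket(AF_INET, SOCK_DGRAM, 0);
>> 	if (fd < 0) {
>> 		perror("socket");
>> 		return 1;
>> 	}
>>
>> 	int on = 1;
>> 	if (ioctl(fd, FIOASYNC, &on) < 0) {
>> 		perror("ioctl(FIOASYNC)");
>> 		return 1;
>> 	}
>>
>> 	/* The generic handler flips O_ASYNC in the file flags. */
>> 	int flags = fcntl(fd, F_GETFL);
>> 	printf("O_ASYNC %s\n", (flags & O_ASYNC) ? "set" : "clear");
>>
>> 	close(fd);
>> 	return 0;
>> }
>> ```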
>
> Any progress on this?
Hi Dmitry,
In my opinion, we first need to figure out the root cause (the race) behind this leak.
I am not fully convinced by your analysis[1] and shared some thoughts on it[2].
I would appreciate it if you could respond to those, so that the issue becomes clearer and
everyone gets on the same page (including the SMC maintainers). Then we can see whether your
other proposal[3] is a proper fix for the issue, or whether anyone has a better idea.
[1] https://lore.kernel.org/netdev/35584a9f-f4c2-423a-8bb8-2c729cedb6fe@yandex.ru/
[2] https://lore.kernel.org/netdev/a88a0731-6cbe-4987-b1e9-afa51f9ab057@linux.alibaba.com/
[3] https://lore.kernel.org/netdev/625c9519-7ae6-43a3-a5d0-81164ad7fd0e@yandex.ru/
Thanks.
>
> Dmitry
>