Message-ID: <a671867f-153c-75a4-0f58-8dcb0d4f9c19@acm.org>
Date: Mon, 4 Jul 2022 21:34:07 -0700
From: Bart Van Assche <bvanassche@....org>
To: Hillf Danton <hdanton@...a.com>
Cc: Mike Christie <michael.christie@...cle.com>,
"lizhijian@...itsu.com" <lizhijian@...itsu.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Leon Romanovsky <leon@...nel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"target-devel@...r.kernel.org" <target-devel@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: use-after-free in srpt_enable_tpg()
On 7/3/22 17:11, Hillf Danton wrote:
> On Sun, 3 Jul 2022 07:55:05 -0700 Bart Van Assche wrote:
>> However, I'm not sure that would make a
>> significant difference since there is a similar while-loop in one of the
>> callers of srpt_remove_one() (disable_device() in the RDMA core).
>
> Hehe... feel free to shed light on how the loop in RDMA core is currently
> making the loop in srpt more prone to uaf?
In my email I was referring to the following code in disable_device():

	wait_for_completion(&device->unreg_completion);
I think that code shows that device removal by the RDMA core is
synchronous in nature. Even if the ib_srpt source code were modified
such that the objects referred to by that code lived longer, the wait
loop in disable_device() would still wait for the ib_device reference
count to drop to zero.
So I do not expect that modifying object lifetimes in ib_srpt.c can lead
to a solution.
Removing configfs directories from inside srpt_release_sport() could be
a solution. However, configfs does not have any API to remove
directories and I'm not aware of any plans to add such an API.
Additionally, several kernel maintainers disagree with invoking the
rmdir system call from inside kernel code.
A potential solution could be to decouple the lifetimes of the data
structures used for configfs (struct se_wwn and struct srpt_tpg) and the
data structures associated with RDMA objects (struct srpt_port). If
nobody else beats me to this, I will try to find the time to implement
this approach.
Bart.