Message-ID: <844f8fc1-1c6c-0102-5412-df799cd327c5@huawei.com>
Date: Tue, 25 Feb 2020 17:13:19 +0800
From: Xu Zaibo <xuzaibo@...wei.com>
To: zhangfei <zhangfei.gao@...aro.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arnd Bergmann <arnd@...db.de>,
Herbert Xu <herbert@...dor.apana.org.au>,
<jonathan.cameron@...wei.com>, <dave.jiang@...el.com>,
<grant.likely@....com>, jean-philippe <jean-philippe@...aro.org>,
Jerome Glisse <jglisse@...hat.com>,
<ilias.apalodimas@...aro.org>, <francois.ozog@...aro.org>,
<kenneth-lee-2012@...mail.com>, Wangzhou <wangzhou1@...ilicon.com>,
"haojian . zhuang" <haojian.zhuang@...aro.org>,
<guodong.xu@...aro.org>
CC: <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-accelerators@...ts.ozlabs.org>,
<linux-crypto@...r.kernel.org>
Subject: Re: [PATCH] uacce: unmap remaining mmapping from user space
Hi,
On 2020/2/25 16:33, zhangfei wrote:
> Hi, Zaibo
>
> On 2020/2/24 at 3:17 PM, Xu Zaibo wrote:
>>> @@ -585,6 +595,13 @@ void uacce_remove(struct uacce_device *uacce)
>>> cdev_device_del(uacce->cdev, &uacce->dev);
>>> xa_erase(&uacce_xa, uacce->dev_id);
>>> put_device(&uacce->dev);
>>> +
>>> + /*
>>> + * unmap remaining mappings from user space, preventing user space
>>> + * from still accessing the mmapped area while the parent device is
>>> + * already removed
>>> + */
>>> + if (uacce->inode)
>>> + unmap_mapping_range(uacce->inode->i_mapping, 0, 0, 1);
>> Should we unmap them at the start of 'uacce_remove', before
>> 'uacce_put_queue'?
>>
> We can do this, though it does not matter, since user space cannot
> interrupt the kernel function uacce_remove.
>
I think it matters :)
Imagine that a process holding the uacce queue is running (reading and
writing the queue) when you do 'uacce_remove'.
The process keeps reading and writing the queue in the window between
'uacce_put_queue' and 'unmap_mapping_range'; however, during that window
the queue and its DMA memory may be grabbed and used by someone else,
since the kernel has already released them. As a result, the running
process will be a disaster.
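
For clarity, the ordering being argued for would look roughly like this
in uacce_remove. This is only a sketch, not the actual patch: the queue
list walk and the 'queues'/'queues_lock' field names are assumed from
context rather than copied from the posted diff.

```c
void uacce_remove(struct uacce_device *uacce)
{
	struct uacce_queue *q;

	/*
	 * 1. Cut off user-space access first: after this, no process
	 *    can touch the mmapped region through stale PTEs.
	 */
	if (uacce->inode)
		unmap_mapping_range(uacce->inode->i_mapping, 0, 0, 1);

	/*
	 * 2. Only now release the queues and their DMA memory; there
	 *    is no window where a process can still read/write memory
	 *    that the kernel has already handed back.
	 */
	mutex_lock(&uacce->queues_lock);
	list_for_each_entry(q, &uacce->queues, list)
		uacce_put_queue(q);
	mutex_unlock(&uacce->queues_lock);

	/* ... remaining teardown: cdev_device_del(), xa_erase(),
	 * put_device() ... */
}
```

With the patch's original order (unmap last), a process can still fault
in and use the mapping between the queue release and the unmap, which is
exactly the window described above.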
cheers,
Zaibo
> Thanks
> .
>