Message-ID: <3af0752f-0534-43c4-913f-4d4df8c8501b@gmail.com>
Date: Fri, 1 Dec 2023 14:51:36 +0800
From: Dongyun Liu <dongyun.liu3@...il.com>
To: Jens Axboe <axboe@...nel.dk>, minchan@...nel.org,
senozhatsky@...omium.org
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
lincheng.yang@...nssion.com, jiajun.ling@...nssion.com,
ldys2014@...mail.com, Dongyun Liu <dongyun.liu@...nssion.com>
Subject: Re: [PATCH] zram: Using GFP_ATOMIC instead of GFP_KERNEL to allocate
bitmap memory in backing_dev_store
On 2023/11/30 23:37, Jens Axboe wrote:
> On 11/30/23 8:20 AM, Dongyun Liu wrote:
>> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
>> index d77d3664ca08..ee6c22c50e09 100644
>> --- a/drivers/block/zram/zram_drv.c
>> +++ b/drivers/block/zram/zram_drv.c
>> @@ -514,7 +514,7 @@ static ssize_t backing_dev_store(struct device *dev,
>>
>> nr_pages = i_size_read(inode) >> PAGE_SHIFT;
>> bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long);
>> - bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
>> + bitmap = kmalloc(bitmap_sz, GFP_ATOMIC);
>> if (!bitmap) {
>> err = -ENOMEM;
>> goto out;
>
> Outside of this moving from a zeroed alloc to one that does not, the
> change looks woefully incomplete. Why does this allocation need to be
> GFP_ATOMIC, and:
Using GFP_ATOMIC indicates that the caller cannot reclaim or sleep, so we
avoid the risk of deadlocking when zram->lock is acquired again in
zram_bvec_write from the reclaim path.
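For the record, one way to keep the zeroed kvzalloc() and still avoid
reclaim re-entering zram (assuming the deadlock really comes from direct
reclaim writing back into zram) would be the memalloc_noio scope API
rather than GFP_ATOMIC; this is only an untested sketch, not what this
patch does:

	/* sketch only: needs <linux/sched/mm.h> for memalloc_noio_*() */
	unsigned int noio_flags;

	noio_flags = memalloc_noio_save();
	/* still zeroed and may still sleep, but reclaim cannot issue I/O */
	bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
	memalloc_noio_restore(noio_flags);
	if (!bitmap) {
		err = -ENOMEM;
		goto out;
	}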
>
> 1) file_name = kmalloc(PATH_MAX, GFP_KERNEL); does not
zram->init_lock is not held at this point, so there is no need for
GFP_ATOMIC here.
> 2) filp_open() -> getname_kernel() -> __getname() does not
> 3) filp_open() -> getname_kernel() does not
> 4) bdev_open_by_dev() does not
Right, these are still missing the GFP_ATOMIC treatment; the GFP_KERNEL
allocations inside those calls are not covered by this patch.
>
> IOW, you have a slew of GFP_KERNEL allocations in there, and you
> probably just patched the largest one. But the core issue remains.
>
> The whole handling of backing_dev_store() looks pretty broken.
>
Indeed, this patch only fixes the largest allocation and does not solve
the problem at its root: there are several paths in backing_dev_store
that allocate memory while zram->init_lock is held, and all of them
would have to be reworked, which I had not considered thoroughly.
Obviously a larger and better patch is needed to eliminate this risk
completely, but I do not think that is necessary at the moment.
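To sketch the direction such a patch could take (untested, illustration
only): instead of changing GFP flags at each call site, the whole region
that runs under zram->init_lock could be put in a memalloc_noio scope,
which would also cover the allocations inside filp_open() and
bdev_open_by_dev() listed above:

	unsigned int noio_flags;

	down_write(&zram->init_lock);
	/*
	 * Every allocation below (getname_kernel(), bdev_open_by_dev(),
	 * the bitmap, ...) keeps GFP_KERNEL, but reclaim will not issue
	 * I/O from this context, so it cannot write back into zram.
	 */
	noio_flags = memalloc_noio_save();

	/* ... existing backing_dev_store() body ... */

	memalloc_noio_restore(noio_flags);
	up_write(&zram->init_lock);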
Thank you for your kindness and patience.