[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <d9d2ede5-7454-4c43-ab5d-29816e266453@fastmail.fm>
Date: Thu, 14 Mar 2024 00:02:52 +0100
From: Bernd Schubert <bernd.schubert@...tmail.fm>
To: Hou Tao <houtao@...weicloud.com>, Miklos Szeredi <miklos@...redi.hu>
Cc: linux-fsdevel@...r.kernel.org, Vivek Goyal <vgoyal@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>, "Michael S . Tsirkin"
<mst@...hat.com>, Matthew Wilcox <willy@...radead.org>,
Benjamin Coddington <bcodding@...hat.com>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, houtao1@...wei.com
Subject: Re: [PATCH v2 1/6] fuse: limit the length of ITER_KVEC dio by
max_pages
On 3/9/24 05:26, Hou Tao wrote:
> Hi,
>
> On 3/1/2024 9:42 PM, Miklos Szeredi wrote:
>> On Wed, 28 Feb 2024 at 15:40, Hou Tao <houtao@...weicloud.com> wrote:
>>
>>> So instead of limiting both the values of max_read and max_write in
>>> kernel, capping the maximal length of kvec iter IO by using max_pages in
>>> fuse_direct_io() just like it does for ubuf/iovec iter IO. Now the max
>>> value for max_pages is 256, so on host with 4KB page size, the maximal
>>> size passed to kmalloc() in copy_args_to_argbuf() is about 1MB+40B. The
>>> allocation of 2MB of physically contiguous memory will still incur
>>> significant stress on the memory subsystem, but the warning is fixed.
>>> Additionally, the requirement for huge physically contiguous memory will
>>> be removed in the following patch.
>> So the issue will be fixed properly by following patches?
>>
>> In that case this patch could be omitted, right?
>
> Sorry for the late reply. Being busy with off-site workshop these days.
>
> No, this patch is still necessary and it is used to limit the number of
> scatterlist used for fuse request and reply in virtio-fs. If the length
> of out_args[0].size is not limited, the number of scatterlist used to
> map the fuse request may be greater than the queue size of virtio-queue
> and the fuse request may hang forever.
I'm currently also totally busy and didn't carefully check, but isn't
there something missing that limits fc->max_write/fc->max_read?
Thanks,
Bernd
Powered by blists - more mailing lists