Message-ID: <7d762c95-e4ca-d612-f70f-64789d4624cf@uls.co.za>
Date: Wed, 26 Jul 2023 17:26:12 +0200
From: Jaco Kroon <jaco@....co.za>
To: Bernd Schubert <bernd.schubert@...tmail.fm>,
Miklos Szeredi <miklos@...redi.hu>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fuse: enable larger read buffers for readdir.
Hi,
On 2023/07/26 15:53, Bernd Schubert wrote:
>
>
> On 7/26/23 12:59, Jaco Kroon wrote:
>> Signed-off-by: Jaco Kroon <jaco@....co.za>
>> ---
>> fs/fuse/Kconfig | 16 ++++++++++++++++
>> fs/fuse/readdir.c | 42 ++++++++++++++++++++++++------------------
>> 2 files changed, 40 insertions(+), 18 deletions(-)
>>
>> diff --git a/fs/fuse/Kconfig b/fs/fuse/Kconfig
>> index 038ed0b9aaa5..0783f9ee5cd3 100644
>> --- a/fs/fuse/Kconfig
>> +++ b/fs/fuse/Kconfig
>> @@ -18,6 +18,22 @@ config FUSE_FS
>> If you want to develop a userspace FS, or if you want to use
>> a filesystem based on FUSE, answer Y or M.
>> +config FUSE_READDIR_ORDER
>> + int
>> + range 0 5
>> + default 5
>> + help
>> + readdir performance varies greatly depending on the size of the read.
>> + Larger buffers result in larger reads, thus fewer reads and higher
>> + performance in return.
>> +
>> + You may want to reduce this value on seriously memory-constrained
>> + systems where 128KiB (assuming 4KiB pages) of cache pages is not ideal.
>> +
>> + This value represents the order of the number of pages to allocate
>> + (ie, the shift value). A value of 0 is thus 1 page (4KiB) where 5 is
>> + 32 pages (128KiB).
>> +
>
> I like the idea of a larger readdir size, but shouldn't that be a
> server/daemon/library decision which size to use, instead of kernel
> compile time? So should be part of FUSE_INIT negotiation?
Yes sure, but there still needs to be a default. And one page at a time
doesn't cut it.
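For reference, the order just shifts PAGE_SIZE, so (assuming the usual
CONFIG_ prefix for the Kconfig symbol; this is only an illustration of the
math, not a quote from the patch):

	/* Order-of-pages chosen at build time via CONFIG_FUSE_READDIR_ORDER. */
	#define READDIR_PAGES_ORDER	CONFIG_FUSE_READDIR_ORDER

	/* Pages and bytes for one readdir buffer. */
	#define READDIR_PAGES		(1U << READDIR_PAGES_ORDER)
	#define READDIR_BUF_SIZE	(PAGE_SIZE << READDIR_PAGES_ORDER)

	/* With 4KiB pages: order 0 -> 1 page (4KiB), order 5 -> 32 pages (128KiB). */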
-- snip --
>> - page = alloc_page(GFP_KERNEL);
>> + page = alloc_pages(GFP_KERNEL, READDIR_PAGES_ORDER);
>
> I guess that should become folio alloc(), one way or the other. Now I
> think order 0 was chosen before to avoid risk of allocation failure. I
> guess it might work to try a large size and to fall back to 0 when
> that failed. Or fall back to the slower vmalloc.
If this varies then a bunch of other code becomes more complex, especially
if one alloc succeeds at the larger size and a follow-up one doesn't.
I'm not familiar with the differences between the different mechanisms
available for allocation.
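If I understand the suggestion correctly, it would look roughly like the
sketch below (hypothetical, not what the patch currently does; the GFP
flags and error handling are my assumptions):

	unsigned int order = READDIR_PAGES_ORDER;
	struct page *page;

	/* Try the large buffer first, without retrying hard or warning on
	 * failure, then fall back to a single page. */
	page = alloc_pages(GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN, order);
	if (!page) {
		order = 0;
		page = alloc_pages(GFP_KERNEL, order);
	}
	if (!page)
		return -ENOMEM;

	/* Everything downstream would then need to carry 'order' along so
	 * that the request size and the eventual __free_pages(page, order)
	 * match whichever allocation succeeded. */

which is the extra complexity I'm worried about.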
-- snip --
> Thanks,
My pleasure,
Jaco