Message-Id: <1fcc97fd-bf32-4ea1-82c1-74a8efb6359b@app.fastmail.com>
Date: Wed, 30 Jul 2025 18:19:37 +0200
From: "Pierre Barre" <pierre@...re.sh>
To: "Christian Schoenebeck" <linux_oss@...debyte.com>, v9fs@...ts.linux.dev
Cc: ericvh@...nel.org, lucho@...kov.net, asmadeus@...ewreck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] 9p: Use kvmalloc for message buffers
Thank you for your email.
> What was msize?
I was mounting the filesystem using:
trans=tcp,port=5564,version=9p2000.L,msize=1048576,cache=mmap,access=user
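For reference, the full invocation was along these lines (the server address and mount point here are placeholders, not from my original report):

mount -t 9p -o trans=tcp,port=5564,version=9p2000.L,msize=1048576,cache=mmap,access=user <server> /mnt

With msize=1048576, message buffers can be up to 1 MiB each, which kmalloc has to satisfy with physically contiguous pages.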
> That would work with certain transports like fd I guess, but not via
> virtio-pci transport for instance, since PCI-DMA requires physical pages. Same
> applies to Xen transport I guess.
Would it be acceptable to add a mount option (nocontig or loosealloc, say) that opts into kvmalloc?
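Something like this, perhaps (untested sketch; the nocontig option, the P9_CLIENT_NOCONTIG bit, and the flags field on struct p9_client are placeholders I am inventing here, not existing code):

	/* net/9p/client.c, in p9_fcall_init() */
	} else {
		/* Only fall back to kvmalloc when the user explicitly
		 * opted in with -o nocontig; the default stays kmalloc,
		 * so virtio/Xen transports keep getting physical pages. */
		if (c->flags & P9_CLIENT_NOCONTIG)
			fc->sdata = kvmalloc(alloc_msize, GFP_NOFS);
		else
			fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
		fc->cache = NULL;
	}

p9_fcall_fini() could then call kvfree() unconditionally, since kvfree() is also safe on kmalloc'ed pointers.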
Best,
Pierre
On Wed, Jul 30, 2025, at 18:08, Christian Schoenebeck wrote:
> On Wednesday, July 30, 2025 5:08:05 PM CEST Pierre Barre wrote:
>> While developing a 9P server (https://github.com/Barre/ZeroFS) and testing it under high load, I was running into allocation failures. The failures occur even with plenty of free memory available, because kmalloc requires physically contiguous memory.
>>
>> This results in errors like:
>> ls: page allocation failure: order:7, mode:0x40c40(GFP_NOFS|__GFP_COMP)
>
> What was msize?
>
>> Signed-off-by: Pierre Barre <pierre@...re.sh>
>> ---
>>  net/9p/client.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/9p/client.c b/net/9p/client.c
>> index 5c1ca57ccd28..f82b5674057c 100644
>> --- a/net/9p/client.c
>> +++ b/net/9p/client.c
>> @@ -230,7 +230,7 @@ static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
>>  		fc->sdata = kmem_cache_alloc(c->fcall_cache, GFP_NOFS);
>>  		fc->cache = c->fcall_cache;
>>  	} else {
>> -		fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
>> +		fc->sdata = kvmalloc(alloc_msize, GFP_NOFS);
>
> That would work with certain transports like fd I guess, but not via
> virtio-pci transport for instance, since PCI-DMA requires physical pages. Same
> applies to Xen transport I guess.
>
>>  		fc->cache = NULL;
>>  	}
>>  	if (!fc->sdata)
>> @@ -252,7 +252,7 @@ void p9_fcall_fini(struct p9_fcall *fc)
>>  	if (fc->cache)
>>  		kmem_cache_free(fc->cache, fc->sdata);
>>  	else
>> -		kfree(fc->sdata);
>> +		kvfree(fc->sdata);
>>  }
>>  EXPORT_SYMBOL(p9_fcall_fini);
>>
>>