Message-ID: <2026737.7mX0AZtNi0@silver>
Date: Wed, 30 Jul 2025 19:28:02 +0200
From: Christian Schoenebeck <linux_oss@...debyte.com>
To: v9fs@...ts.linux.dev, Pierre Barre <pierre@...re.sh>
Cc: ericvh@...nel.org, lucho@...kov.net, asmadeus@...ewreck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] 9p: Use kvmalloc for message buffers
On Wednesday, July 30, 2025 6:19:37 PM CEST Pierre Barre wrote:
> Thank you for your email.
>
> > What was msize?
>
> I was mounting the filesystem using:
>
> trans=tcp,port=5564,version=9p2000.L,msize=1048576,cache=mmap,access=user
Yeah, that explains both why you were triggering this issue, as a 1M msize will
likely cause kmalloc() failures under heavy load (such a large physically
contiguous allocation becomes hard to satisfy once memory is fragmented), and
why your patch was working for you with your chosen tcp transport.
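(For context, a rough sketch of the kind of change being discussed, assuming
the buffer in question is the sdata field set up by p9_fcall_init() in
net/9p/client.c; this is not the actual patch:)

/* Sketch only: allocate the message buffer with kvmalloc() so that a
 * large msize (e.g. 1M) no longer depends on finding physically
 * contiguous memory; kvmalloc() falls back to vmalloc() when kmalloc()
 * cannot satisfy the request. GFP_NOFS matches the existing allocation
 * call sites.
 */
fc->sdata = kvmalloc(alloc_msize, GFP_NOFS);
if (!fc->sdata)
        return -ENOMEM;
fc->capacity = alloc_msize;

/* ... with the matching free path switched to kvfree(fc->sdata). */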
> > That would work with certain transports like fd I guess, but not via
> > virtio-pci transport for instance, since PCI-DMA requires physical pages. Same
> > applies to Xen transport I guess.
>
> Would it be acceptable to add a mount option (like nocontig or loosealloc?) that enables kvmalloc?
Dominique's call obviously, I'm just giving my two cents here. To me it would
make sense to fix the root cause instead of papering over a symptom:
Right now 9p filesystem code (under fs/9p) requires a linear buffer, whereas
some 9p transports (under net/9p) require physical pages, and the latter is
not going to change.
One solution therefore might be changing fs/9p code to work on a scatter/
gather list instead of a simple linear buffer. But I guess that would be too
much work.
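(Just to illustrate what that would mean, and not existing 9p code: a
scatter/gather list describes the message as a set of page segments rather
than one linear region, roughly like this, where "pages" and "nr_pages" are
hypothetical:)

#include <linux/scatterlist.h>

/* Illustration only: build a scatterlist over nr_pages discontiguous
 * pages instead of one linear buffer. Neither "pages" nor "nr_pages"
 * exists in the current 9p code.
 */
struct scatterlist *sgl;
unsigned int i;

sgl = kmalloc_array(nr_pages, sizeof(*sgl), GFP_NOFS);
if (!sgl)
        return -ENOMEM;
sg_init_table(sgl, nr_pages);
for (i = 0; i < nr_pages; i++)
        sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

The cost is that every place in fs/9p that currently assumes a flat buffer it
can index into would have to learn to walk such a list.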
So a more reasonable solution might be using kvmalloc(), as you suggested, and
adjusting the individual transports so that they translate a virtual memory
address to a list of physical pages via e.g. vmalloc_to_page() where needed.
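(Again only a sketch of that idea; p9_buf_to_page() is a made-up name, not an
existing function:)

#include <linux/mm.h>           /* is_vmalloc_addr(), virt_to_page() */
#include <linux/vmalloc.h>      /* vmalloc_to_page() */

/* Hypothetical helper: resolve one page of a kvmalloc()ed buffer to a
 * struct page, covering both the kmalloc() case (physically contiguous)
 * and the vmalloc() fallback case.
 */
static struct page *p9_buf_to_page(void *buf)
{
        if (is_vmalloc_addr(buf))
                return vmalloc_to_page(buf);
        return virt_to_page(buf);
}

A transport like virtio or Xen would then walk the buffer in PAGE_SIZE steps
and build its descriptors from the pages returned here, instead of assuming
the whole buffer is one physically contiguous region.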
/Christian