Message-ID: <20260124145754.GA57116@j66a10360.sqa.eu95>
Date: Sat, 24 Jan 2026 22:57:54 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com >
To: Uladzislau Rezki <urezki@...il.com>
Cc: "D. Wythe" <alibuda@...ux.alibaba.com>,
"David S. Miller" <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dust Li <dust.li@...ux.alibaba.com>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Sidraya Jayagond <sidraya@...ux.ibm.com>,
Wenjia Zhang <wenjia@...ux.ibm.com>,
Mahanta Jambigi <mjambigi@...ux.ibm.com>,
Simon Horman <horms@...nel.org>, Tony Lu <tonylu@...ux.alibaba.com>,
Wen Gu <guwen@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-rdma@...r.kernel.org,
linux-s390@...r.kernel.org, netdev@...r.kernel.org,
oliver.yang@...ux.alibaba.com
Subject: Re: [PATCH net-next 2/3] mm: vmalloc: export find_vm_area()

On Sat, Jan 24, 2026 at 11:48:59AM +0100, Uladzislau Rezki wrote:
> Hello, D. Wythe!
>
> > On Fri, Jan 23, 2026 at 07:55:17PM +0100, Uladzislau Rezki wrote:
> > > On Fri, Jan 23, 2026 at 04:23:48PM +0800, D. Wythe wrote:
> > > > find_vm_area() provides a way to find the vm_struct associated with a
> > > > virtual address. Export this symbol to modules so that modularized
> > > > subsystems can perform lookups on vmalloc addresses.
> > > >
> > > > Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
> > > > ---
> > > > mm/vmalloc.c | 1 +
> > > > 1 file changed, 1 insertion(+)
> > > >
> > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > index ecbac900c35f..3eb9fe761c34 100644
> > > > --- a/mm/vmalloc.c
> > > > +++ b/mm/vmalloc.c
> > > > @@ -3292,6 +3292,7 @@ struct vm_struct *find_vm_area(const void *addr)
> > > >
> > > > return va->vm;
> > > > }
> > > > +EXPORT_SYMBOL_GPL(find_vm_area);
> > > >
> > > This is internal. We can not just export it.
> > >
> > > --
> > > Uladzislau Rezki
> >
> > Hi Uladzislau,
> >
> > Thank you for the feedback. I agree that we should avoid exposing
> > internal implementation details like struct vm_struct to external
> > subsystems.
> >
> > Following Christoph's suggestion, I'm planning to encapsulate the page
> > order lookup into a minimal helper instead:
> >
> > unsigned int vmalloc_page_order(const void *addr)
> > {
> > 	struct vm_struct *vm;
> >
> > 	vm = find_vm_area(addr);
> > 	return vm ? vm->page_order : 0;
> > }
> > EXPORT_SYMBOL_GPL(vmalloc_page_order);
> >
> > Does this approach look reasonable to you? It would keep the vm_struct
> > layout private while satisfying the optimization needs of SMC.
> >
> Could you please clarify why you need info about page_order? I have not
> looked at your second patch.
>
> Thanks!
>
> --
> Uladzislau Rezki

Hi Uladzislau,

This stems from optimizing memory registration in SMC-R. To provide the
RDMA hardware with direct access to memory buffers, we must register
them with the NIC. During this process, the hardware generates one MTT
entry for each physically contiguous block. Since these hardware entries
are a scarce resource, and SMC currently defaults to a 4KB registration
granularity, a single 2MB buffer consumes 512 entries. In
high-concurrency scenarios, this inefficiency quickly exhausts NIC
resources and becomes a major bottleneck for system scalability.
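
Just to make the arithmetic concrete, here is an illustration only, not
code from this series: smc_mtt_entries() is a made-up name, while
DIV_ROUND_UP() and the SZ_* constants are the usual helpers from
<linux/math.h> and <linux/sizes.h>.

static unsigned long smc_mtt_entries(unsigned long buf_size,
				     unsigned long block_size)
{
	/* the NIC consumes one MTT entry per physically contiguous block */
	return DIV_ROUND_UP(buf_size, block_size);
}

/* e.g. smc_mtt_entries(SZ_2M, SZ_4K) == 512 with today's 4KB granularity */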

To address this, we intend to use vmalloc_huge(). When it successfully
allocates high-order pages, the vmalloc area is backed by a sequence of
physically contiguous chunks (e.g., 2MB each). If we know this
page_order, we can register these larger physical blocks instead of
individual 4KB pages, reducing MTT consumption from 512 entries down to
1 for every 2MB of memory (with page_order == 9).
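
In pseudo-C, the intended logic is roughly the sketch below;
vmalloc_page_order() is the helper proposed above (not an existing
API), and a 4KB base page size is assumed:

unsigned int order = vmalloc_page_order(buf);	/* 0 on 4KB fallback, 9 for 2MB chunks */
unsigned long block = PAGE_SIZE << order;	/* registration granularity: 4KB or 2MB */

/* per 2MB of buffer: DIV_ROUND_UP(SZ_2M, block) is 512 at order 0, 1 at order 9 */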

However, the result of vmalloc_huge() is currently opaque to the caller.
We cannot determine whether it successfully allocated huge pages or fell
back to 4KB pages based solely on the returned pointer. Therefore, we
need a helper function to query the actual page order, enabling SMC-R to
adapt its registration logic to the underlying physical layout.
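
On the SMC-R side, the usage would look roughly like the sketch below.
It is heavily simplified (scatterlist setup and error handling omitted,
buf/sgt/mr are placeholders); vmalloc_page_order() is the proposed
helper, and ib_map_mr_sg() is the existing RDMA core API whose last
argument is the mapping page size.

unsigned int order = vmalloc_page_order(buf);

/* register with the largest block size the physical backing allows */
n = ib_map_mr_sg(mr, sgt->sgl, sgt->nents, NULL, PAGE_SIZE << order);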

I hope this clarifies our design motivation!

Best regards,
D. Wythe