Message-ID: <20260126120226.GA6424@j66a10360.sqa.eu95>
Date: Mon, 26 Jan 2026 20:02:26 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com >
To: Uladzislau Rezki <urezki@...il.com>
Cc: "D. Wythe" <alibuda@...ux.alibaba.com>,
"David S. Miller" <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dust Li <dust.li@...ux.alibaba.com>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Sidraya Jayagond <sidraya@...ux.ibm.com>,
Wenjia Zhang <wenjia@...ux.ibm.com>,
Mahanta Jambigi <mjambigi@...ux.ibm.com>,
Simon Horman <horms@...nel.org>, Tony Lu <tonylu@...ux.alibaba.com>,
Wen Gu <guwen@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-rdma@...r.kernel.org,
linux-s390@...r.kernel.org, netdev@...r.kernel.org,
oliver.yang@...ux.alibaba.com
Subject: Re: [PATCH net-next 2/3] mm: vmalloc: export find_vm_area()
On Mon, Jan 26, 2026 at 11:28:46AM +0100, Uladzislau Rezki wrote:
> Hello, D. Wythe!
>
> > > > On Fri, Jan 23, 2026 at 07:55:17PM +0100, Uladzislau Rezki wrote:
> > > > > On Fri, Jan 23, 2026 at 04:23:48PM +0800, D. Wythe wrote:
> > > > > > find_vm_area() provides a way to find the vm_struct associated with a
> > > > > > virtual address. Export this symbol to modules so that modularized
> > > > > > subsystems can perform lookups on vmalloc addresses.
> > > > > >
> > > > > > Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
> > > > > > ---
> > > > > > mm/vmalloc.c | 1 +
> > > > > > 1 file changed, 1 insertion(+)
> > > > > >
> > > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > > index ecbac900c35f..3eb9fe761c34 100644
> > > > > > --- a/mm/vmalloc.c
> > > > > > +++ b/mm/vmalloc.c
> > > > > > @@ -3292,6 +3292,7 @@ struct vm_struct *find_vm_area(const void *addr)
> > > > > >
> > > > > > return va->vm;
> > > > > > }
> > > > > > +EXPORT_SYMBOL_GPL(find_vm_area);
> > > > > >
> > > > > This is internal. We can not just export it.
> > > > >
> > > > > --
> > > > > Uladzislau Rezki
> > > >
> > > > Hi Uladzislau,
> > > >
> > > > Thank you for the feedback. I agree that we should avoid exposing
> > > > internal implementation details like struct vm_struct to external
> > > > subsystems.
> > > >
> > > > Following Christoph's suggestion, I'm planning to encapsulate the page
> > > > order lookup into a minimal helper instead:
> > > >
> > > > unsigned int vmalloc_page_order(const void *addr)
> > > > {
> > > >         struct vm_struct *vm;
> > > >
> > > >         vm = find_vm_area(addr);
> > > >         return vm ? vm->page_order : 0;
> > > > }
> > > > EXPORT_SYMBOL_GPL(vmalloc_page_order);
> > > >
> > > > Does this approach look reasonable to you? It would keep the vm_struct
> > > > layout private while satisfying the optimization needs of SMC.
> > > >
> > > Could you please clarify why you need info about page_order? I have not
> > > looked at your second patch.
> > >
> > > Thanks!
> > >
> > > --
> > > Uladzislau Rezki
> >
> > Hi Uladzislau,
> >
> > This stems from optimizing memory registration in SMC-R. To provide the
> > RDMA hardware with direct access to memory buffers, we must register
> > them with the NIC. During this process, the hardware generates one MTT
> > entry for each physically contiguous block. Since these hardware entries
> > are a finite and scarce resource, and SMC currently defaults to a 4KB
> > registration granularity, a single 2MB buffer consumes 512 entries. In
> > high-concurrency scenarios, this inefficiency quickly exhausts NIC
> > resources and becomes a major bottleneck for system scalability.
> >
> > To address this, we intend to use vmalloc_huge(). When it successfully
> > allocates high-order pages, the vmalloc area is backed by a sequence of
> > physically contiguous chunks (e.g., 2MB each). If we know this
> > page_order, we can register these larger physical blocks instead of
> > individual 4KB pages, reducing MTT consumption from 512 entries down to
> > 1 for every 2MB of memory (with page_order == 9).
> >
> > However, the result of vmalloc_huge() is currently opaque to the caller.
> > We cannot determine whether it successfully allocated huge pages or fell
> > back to 4KB pages based solely on the returned pointer. Therefore, we
> > need a helper function to query the actual page order, enabling SMC-R to
> > adapt its registration logic to the underlying physical layout.
> >
> > I hope this clarifies our design motivation!
> >
> Thanks for the explanation. Yes, it clarifies the intention.
>
> As for proposed patch above:
>
> - A page_order is available if CONFIG_HAVE_ARCH_HUGE_VMALLOC is defined;
> - It makes sense to get the node, grab its spin-lock, find the VM, save
>   the page_order and release the lock.
>
> You can have a look at the vmalloc_dump_obj(void *object) function.
> We use spin_trylock() there whereas you just need spin_lock(). But the idea
> is the same.
>
> --
> Uladzislau Rezki

Hi Uladzislau,

Thanks very much for the detailed guidance, especially on the correct
locking pattern. This is extremely helpful. I will follow it and send
a v2 patch series with the new helper implemented in mm/vmalloc.c.
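
To make sure I read the suggestion correctly, here is a rough sketch of
what I have in mind for v2. It follows the lookup path of
vmalloc_dump_obj(), only with a plain spin_lock() instead of the
trylock; the internal names used below (addr_to_node(),
__find_vmap_area(), vm_area_page_order()) are my reading of the current
mm/vmalloc.c and still need to be double-checked against net-next:

unsigned int vmalloc_page_order(const void *addr)
{
        unsigned long va_addr = (unsigned long)addr;
        struct vmap_node *vn;
        struct vmap_area *va;
        unsigned int order = 0;

        /* Same node lookup as vmalloc_dump_obj(). */
        vn = addr_to_node(va_addr);

        spin_lock(&vn->busy.lock);
        va = __find_vmap_area(va_addr, &vn->busy.root);
        if (va && va->vm)
                order = vm_area_page_order(va->vm);
        spin_unlock(&vn->busy.lock);

        return order;
}
EXPORT_SYMBOL_GPL(vmalloc_page_order);

Since vm_area_page_order() already returns 0 when
CONFIG_HAVE_ARCH_HUGE_VMALLOC is not set, callers would simply fall
back to per-page (4KB) registration in that case.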
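
On the SMC-R side, the order returned by such a helper would then be
used to build one scatterlist entry per physically contiguous chunk
instead of one per 4KB page. The following is purely illustrative
(smc_sketch_map_buf() is a made-up name, not existing net/smc code),
just to show how the MTT count drops from 512 to 1 per 2MB buffer when
page_order == 9:

#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/* Illustrative only: map a vmalloc'ed buffer in page_order-sized chunks. */
static int smc_sketch_map_buf(struct sg_table *sgt, void *buf,
                              size_t size, unsigned int page_order)
{
        unsigned long chunk = PAGE_SIZE << page_order;
        unsigned int nents = DIV_ROUND_UP(size, chunk);
        struct scatterlist *sg;
        size_t off = 0;
        int rc, i;

        rc = sg_alloc_table(sgt, nents, GFP_KERNEL);
        if (rc)
                return rc;

        for_each_sg(sgt->sgl, sg, nents, i) {
                size_t len = min_t(size_t, chunk, size - off);

                /* vmalloc_to_page() yields the head page of this chunk. */
                sg_set_page(sg, vmalloc_to_page(buf + off), len, 0);
                off += len;
        }

        return 0;
}
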
Thanks again for your support.

Best regards,
D. Wythe