Message-ID: <Z-u2O_MO1NWP42kL@pc636>
Date: Tue, 1 Apr 2025 11:47:39 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Huan Yang <link@...o.com>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Cc: "Vishal Moola (Oracle)" <vishal.moola@...il.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 0/1] Introduce vmap_file()
On Tue, Apr 01, 2025 at 02:08:53PM +0800, Huan Yang wrote:
>
> > On 2025/4/1 11:19, Vishal Moola (Oracle) wrote:
> > On Tue, Apr 01, 2025 at 10:21:46AM +0800, Huan Yang wrote:
> > > On 2025/4/1 09:50, Vishal Moola (Oracle) wrote:
> > > > On Mon, Mar 31, 2025 at 10:05:53AM +0800, Huan Yang wrote:
> > > > > Hi Vishal,
> > > > >
> > > > > On 2025/3/29 05:13, Vishal Moola (Oracle) wrote:
> > > > > > Currently, users have to call vmap() or vmap_pfn() to map pages to
> > > > > > kernel virtual space. vmap_pfn() is for special pages (i.e. pfns
> > > > > > without struct page). vmap() handles normal pages.
> > > > > >
> > > > > > With large folios, we may want to map ranges that only span
> > > > > > part of a folio (e.g. mapping half of a 2MB folio).
> > > > > > vmap_file() will allow us to do so.
> > > > > You mention vmap_file() can support mapping a sub-range of a folio, but when I look at the code, I can't figure out
> > > > >
> > > > > how to use it that way. Maybe I missed something? :)
> > > > I took a look at the udmabuf code. Rather than iterating through the
> > > > folios using pfns, you can calculate the corresponding file offsets
> > > > (maybe you already have them?) to map the desired folios.
> > > Currently udmabuf's folios are not simply file-based (even though all the memory comes from a memfd). The user can provide
> > >
> > > arbitrary ranges of the memfd for udmabuf to use. For example:
> > >
> > > We get a memfd of, say, 4M, and the user splits it into [0, 2M), [1M, 2M), [2M, 4M), so you can see the 1M-2M range repeats.
> > >
> > > These ranges are gathered by udmabuf_create_list, then udmabuf uses them. So udmabuf records them as a folio array plus an offset array.
> > I was thinking you could call vmap_file() on every sub-range and use
> > those addresses. It should work; we'd have to look at making the udmabuf APIs
> > support it.
>
> Hmmm, how would we get a contiguous virtual address? Or is there a way to merge the addresses returned by each split vmap?
>
As far as I can see, the patch in question maps the whole file to contiguous
memory, but I may be missing something. A partial-populate approach requires
getting an area first and then populating only part of it.
We already have something similar:
<snip>
/**
* vm_area_map_pages - map pages inside given sparse vm_area
* @area: vm_area
* @start: start address inside vm_area
* @end: end address inside vm_area
* @pages: pages to map (always PAGE_SIZE pages)
*/
int vm_area_map_pages(struct vm_struct *area, unsigned long start,
unsigned long end, struct page **pages)
{
...
<snip>
It is used by BPF.
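
For illustration, here is a minimal sketch (not from the patch under
discussion) of how a caller such as udmabuf might reserve a single sparse
area and then populate it, so all returned kernel virtual addresses stay
contiguous. The helper name and the single-call population are assumptions
made for the example; get_vm_area(), VM_SPARSE, vm_area_map_pages() and
free_vm_area() are the existing APIs:

<snip>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Illustrative helper; the name and layout are hypothetical. */
static void *map_pages_in_sparse_area(struct page **pages,
				      unsigned long nr_pages)
{
	unsigned long size = nr_pages << PAGE_SHIFT;
	struct vm_struct *area;
	unsigned long start, end;

	/* Reserve kernel virtual address space only; nothing is mapped yet. */
	area = get_vm_area(size, VM_SPARSE);
	if (!area)
		return NULL;

	start = (unsigned long)area->addr;
	end = start + size;

	/*
	 * Populate the reserved area. With a sparse area this could also be
	 * done in several calls, one per sub-range, while the addresses
	 * remain contiguous because they all live in the same vm area.
	 */
	if (vm_area_map_pages(area, start, end, pages)) {
		free_vm_area(area);
		return NULL;
	}

	return area->addr;
}
<snip>
In a real user the vm_area_map_pages() call could presumably be issued once
per sub-range, with per-range start/end offsets inside the same area.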
--
Uladzislau Rezki