Message-ID: <bcbbc2e9-858f-46ed-909e-1d911dd614f0@vivo.com>
Date: Tue, 18 Mar 2025 16:39:40 +0800
From: Huan Yang <link@...o.com>
To: Christoph Hellwig <hch@....de>
Cc: akpm@...ux-foundation.org, bingbu.cao@...ux.intel.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
lorenzo.stoakes@...cle.com, opensource.kernel@...o.com, rppt@...nel.org,
ryan.roberts@....com, urezki@...il.com, ziy@...dia.com,
vivek.kasireddy@...el.com
Subject: Re: [PATCH] mm/vmalloc: fix mischeck pfn valid in vmap_pfns
Hi Christoph,
On 2025/3/18 16:33, Christoph Hellwig wrote:
>
> On Tue, Mar 18, 2025 at 04:20:17PM +0800, Huan Yang wrote:
>> This prevents us from properly invoking vmap, which is why we turned to
>> vmap_pfn instead.
>>
>> Even if a folio-based vmap were implemented, it still could not map
>> multiple sub-ranges of folios into a vmalloc region. The range matters:
>> it may be an offset within a memfd, so the entire folio is not needed.
>>
>> So I still consider vmap_pfn the optimal solution for this specific scenario. :)
> No, vmap_pfn is entirely for memory not backed by pages or folios,
> i.e. PCIe BARs and similar memory. This must not be mixed with proper
> folio backed memory.
OK, I'll look into this further.
>
> So you'll need a vmap for folios to support this use case.
That may not be possible.
>
>>> historically backed by pages and now folios.
>> So with HVO, it is also not backed by pages: only the folio's head page
>> struct remains, and the page structs for the tail pfns are freed.
> And a fully folios based vmap solves that problem.
A folio may be 2MB, or as large as 1GB. What if we only need part of it, say 1MB or 512MB: can a folio-based vmap handle that?
Normally we could build a 4K-page based array and map that, but with HVO the tail page structs are gone, so we can't. That's why I want to base it on pfns.
Thank you
Huan Yang
>