Message-ID: <996c64ca-8e97-2143-9227-ce65b89ae35e@huaweicloud.com>
Date: Wed, 24 Dec 2025 09:37:39 +0800
From: Hou Tao <houtao@...weicloud.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
linux-mm@...ck.org, linux-nvme@...ts.infradead.org,
Bjorn Helgaas <bhelgaas@...gle.com>, Logan Gunthorpe <logang@...tatee.com>,
Alistair Popple <apopple@...dia.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Tejun Heo <tj@...nel.org>,
"Rafael J . Wysocki" <rafael@...nel.org>, Danilo Krummrich
<dakr@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...nel.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Keith Busch
<kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
houtao1@...wei.com
Subject: Re: [PATCH 00/13] Enable compound page for p2pdma memory
On 12/24/2025 9:18 AM, Hou Tao wrote:
> Hi,
>
> On 12/21/2025 8:19 PM, Leon Romanovsky wrote:
>> On Sat, Dec 20, 2025 at 12:04:33PM +0800, Hou Tao wrote:
>>> From: Hou Tao <houtao1@...wei.com>
>>>
>>> Hi,
>>>
>>> device-dax already supports compound pages. That not only reduces the
>>> memory cost of struct page significantly, it also improves the
>>> performance of get_user_pages() when a 2MB or 1GB page size is used. We
>>> are experimenting with using p2p DMA to transfer the contents of an
>>> NVMe SSD directly into an NPU.
>> I’ll admit my understanding here is limited, and lately everything tends
>> to look like a DMABUF problem to me. Could you explain why DMABUF support
>> is not being used for this use case?
> I have limited knowledge of dma-buf, so correct me if I am wrong. It
> seems that, as of now, there is no available way to use dma-buf to
> read/write files. A userspace vaddr backed by a dma-buf is a PFN
> mapping, so get_user_pages() will reject such an address.
Hit the send button too soon :) So in my understanding, the advantage of
dma-buf is that it doesn't need struct page, which also means that
special handling is needed to support IO from/to a dma-buf (e.g., "[RFC
v2 00/11] Add dmabuf read/write via io_uring" [1]).
[1]
https://lore.kernel.org/io-uring/cover.1763725387.git.asml.silence@gmail.com/
>> Thanks
>>
>>> The size of the NPU HBM is 32GB or larger, and there are at most 8 NPUs
>>> in the host. When using base pages, the memory overhead is about 4GB
>>> for 128GB of HBM, and mapping 32GB of HBM into userspace takes about
>>> 0.8 seconds. Since the ZONE_DEVICE memory type already supports
>>> compound pages, enable compound page support for p2pdma memory as well.
>>> After applying the patch set, when using 1GB pages, the memory overhead
>>> is about 2MB and the mmap takes about 0.04 ms.
>>>
>>> The main difference between the compound page support in device-dax
>>> and in p2pdma is that p2pdma inserts the pages into the user VMA during
>>> mmap instead of at page-fault time, mainly for simplicity. The patch
>>> set is structured as shown below:
>>>
>>> Patch #1~#2: tiny bug fixes for p2pdma.
>>> Patch #3~#5: add callback support in kernfs and sysfs, including the
>>> pagesize, may_split and get_unmapped_area callbacks. These callbacks
>>> are necessary to support compound pages when mmap()ing a sysfs binary
>>> file.
>>> Patch #6~#7: create compound pages for p2pdma memory in the kernel.
>>> Patch #8~#10: support mapping compound pages into userspace.
>>> Patch #11~#12: support compound pages for the NVMe CMB.
>>> Patch #13: enable compound page support for p2pdma memory.
>>>
>>> Please see individual patches for more details. Comments and
>>> suggestions are always welcome.
>>>
>>> Hou Tao (13):
>>> PCI/P2PDMA: Release the per-cpu ref of pgmap when vm_insert_page()
>>> fails
>>> PCI/P2PDMA: Fix the warning condition in p2pmem_alloc_mmap()
>>> kernfs: add support for get_unmapped_area callback
>>> kernfs: add support for may_split and pagesize callbacks
>>> sysfs: support get_unmapped_area callback for binary file
>>> PCI/P2PDMA: add align parameter for pci_p2pdma_add_resource()
>>> PCI/P2PDMA: create compound page for aligned p2pdma memory
>>> mm/huge_memory: add helpers to insert huge page during mmap
>>> PCI/P2PDMA: support get_unmapped_area to return aligned vaddr
>>> PCI/P2PDMA: support compound page in p2pmem_alloc_mmap()
>>> PCI/P2PDMA: add helper pci_p2pdma_max_pagemap_align()
>>> nvme-pci: introduce cmb_devmap_align module parameter
>>> PCI/P2PDMA: enable compound page support for p2pdma memory
>>>
>>> drivers/accel/habanalabs/common/hldio.c | 3 +-
>>> drivers/nvme/host/pci.c | 10 +-
>>> drivers/pci/p2pdma.c | 140 ++++++++++++++++++++++--
>>> fs/kernfs/file.c | 79 +++++++++++++
>>> fs/sysfs/file.c | 15 +++
>>> include/linux/huge_mm.h | 4 +
>>> include/linux/kernfs.h | 3 +
>>> include/linux/pci-p2pdma.h | 30 ++++-
>>> include/linux/sysfs.h | 4 +
>>> mm/huge_memory.c | 66 +++++++++++
>>> 10 files changed, 339 insertions(+), 15 deletions(-)
>>>
>>> --
>>> 2.29.2
>>>
>>>