Message-ID: <20250718115112.3881129-4-ymaman@nvidia.com>
Date: Fri, 18 Jul 2025 14:51:10 +0300
From: Yonatan Maman <ymaman@...dia.com>
To: Jérôme Glisse <jglisse@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, Jason Gunthorpe <jgg@...pe.ca>, Leon Romanovsky
<leon@...nel.org>
CC: Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>, "David
Airlie" <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Alistair Popple
<apopple@...dia.com>, Ben Skeggs <bskeggs@...dia.com>, Michael Guralnik
<michaelgur@...dia.com>, Or Har-Toov <ohartoov@...dia.com>, Daisuke Matsuda
<dskmtsd@...il.com>, Shay Drory <shayd@...dia.com>, <linux-mm@...ck.org>,
<linux-rdma@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
<nouveau@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>, "Yonatan
Maman" <Ymaman@...dia.com>, Gal Shalom <GalShalom@...dia.com>
Subject: [PATCH v2 3/5] IB/core: P2P DMA for device private pages
From: Yonatan Maman <Ymaman@...dia.com>
Add a Peer-to-Peer (P2P) DMA request to the hmm_range_fault call,
utilizing the capabilities introduced in mm/hmm. By setting
range.default_flags to HMM_PFN_REQ_FAULT | HMM_PFN_ALLOW_P2P, HMM
attempts to establish P2P DMA mappings for device private pages
instead of faulting them back to system memory.
This lets RDMA reach device private pages directly, avoiding the
performance overhead of migrating data between devices (e.g., GPUs)
and system memory, which benefits GPU-centric applications that
combine RDMA with device private pages.
Signed-off-by: Yonatan Maman <Ymaman@...dia.com>
Signed-off-by: Gal Shalom <GalShalom@...dia.com>
---
drivers/infiniband/core/umem_odp.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index b1c44ec1a3f3..7ba80ed4977c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -362,6 +362,10 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
range.default_flags |= HMM_PFN_REQ_WRITE;
}
+ if (access_mask & HMM_PFN_ALLOW_P2P)
+ range.default_flags |= HMM_PFN_ALLOW_P2P;
+
+ range.pfn_flags_mask = HMM_PFN_ALLOW_P2P;
range.hmm_pfns = &(umem_odp->map.pfn_list[pfn_start_idx]);
timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
--
2.34.1