Message-ID: <20250718115112.3881129-5-ymaman@nvidia.com>
Date: Fri, 18 Jul 2025 14:51:11 +0300
From: Yonatan Maman <ymaman@...dia.com>
To: Jérôme Glisse <jglisse@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, Jason Gunthorpe <jgg@...pe.ca>, Leon Romanovsky
<leon@...nel.org>
CC: Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>, "David
Airlie" <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Alistair Popple
<apopple@...dia.com>, Ben Skeggs <bskeggs@...dia.com>, Michael Guralnik
<michaelgur@...dia.com>, Or Har-Toov <ohartoov@...dia.com>, Daisuke Matsuda
<dskmtsd@...il.com>, Shay Drory <shayd@...dia.com>, <linux-mm@...ck.org>,
<linux-rdma@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
<nouveau@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>, "Yonatan
Maman" <Ymaman@...dia.com>, Gal Shalom <GalShalom@...dia.com>
Subject: [PATCH v2 4/5] RDMA/mlx5: Enable P2P DMA with fallback mechanism
From: Yonatan Maman <Ymaman@...dia.com>
Add support for P2P DMA for mlx5 NIC devices, with an automatic
fallback to standard DMA when P2P mapping fails.

The change requests P2P DMA by default by setting the
HMM_PFN_ALLOW_P2P flag. If the P2P mapping fails with -EFAULT, the
operation is retried without the flag, falling back to the standard
DMA flow through host memory.
Signed-off-by: Yonatan Maman <Ymaman@...dia.com>
Signed-off-by: Gal Shalom <GalShalom@...dia.com>
---
drivers/infiniband/hw/mlx5/odp.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index f6abd64f07f7..6a0171117f48 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -715,6 +715,10 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= HMM_PFN_WRITE;
 
+	/*
+	 * Try the fault with the HMM_PFN_ALLOW_P2P flag set first
+	 */
+	access_mask |= HMM_PFN_ALLOW_P2P;
 	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
 	if (np < 0)
 		return np;
@@ -724,6 +728,18 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	 * ib_umem_odp_map_dma_and_lock already checks this.
 	 */
 	ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	if (ret == -EFAULT) {
+		/*
+		 * -EFAULT indicates a P2P mapping error; retry without HMM_PFN_ALLOW_P2P
+		 */
+		mutex_unlock(&odp->umem_mutex);
+		access_mask &= ~(HMM_PFN_ALLOW_P2P);
+		np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
+		if (np < 0)
+			return np;
+		ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	}
+
 	mutex_unlock(&odp->umem_mutex);
 
 	if (ret < 0) {
--
2.34.1