Date:   Fri, 5 Aug 2022 07:46:15 +0000
From:   Li Zhijian <lizhijian@...itsu.com>
To:     Jason Gunthorpe <jgg@...pe.ca>, Zhu Yanjun <zyjzyj2000@...il.com>,
        "Leon Romanovsky" <leon@...nel.org>, <linux-rdma@...r.kernel.org>
CC:     Xiao Yang <yangx.jy@...itsu.com>, <y-goto@...itsu.com>,
        Bob Pearson <rpearsonhpe@...il.com>,
        Mark Bloch <mbloch@...dia.com>,
        Aharon Landau <aharonl@...dia.com>,
        Tom Talpey <tom@...pey.com>, <tomasz.gromadzki@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        <linux-kernel@...r.kernel.org>, Li Zhijian <lizhijian@...itsu.com>
Subject: [PATCH v4 2/6] RDMA/rxe: Allow registering persistent flag for pmem MR only

A memory region can support at most two flush access flags:
IB_ACCESS_FLUSH_PERSISTENT and IB_ACCESS_FLUSH_GLOBAL_VISIBILITY.

However, the persistent flush flag may only be registered on a pmem MR,
since only pmem is able to persist data across power cycles.

Therefore, registering a persistent access flag on a non-pmem MR will be
rejected by the kernel.
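
For illustration, a user-space registration along these lines is expected
to fail with EINVAL once the buffer is ordinary DRAM rather than pmem
(sketch only; the verbs-level flag name IBV_ACCESS_FLUSH_PERSISTENT is
assumed here and may differ from what rdma-core ultimately exposes for
this kernel flag):

	/* Sketch (assumed flag name IBV_ACCESS_FLUSH_PERSISTENT): registering
	 * a plain malloc'ed buffer with the persistent flush flag should be
	 * rejected by rxe after this patch, with errno set to EINVAL. */
	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <infiniband/verbs.h>

	static int try_persistent_reg(struct ibv_pd *pd, size_t len)
	{
		void *buf = malloc(len);	/* regular DRAM, not pmem */
		struct ibv_mr *mr;

		if (!buf)
			return -ENOMEM;

		mr = ibv_reg_mr(pd, buf, len,
				IBV_ACCESS_LOCAL_WRITE |
				IBV_ACCESS_REMOTE_WRITE |
				IBV_ACCESS_FLUSH_PERSISTENT);	/* assumed name */
		if (!mr) {
			/* expected: kernel rejects persistent flush on non-pmem */
			fprintf(stderr, "ibv_reg_mr: %s\n", strerror(errno));
			free(buf);
			return -errno;
		}

		ibv_dereg_mr(mr);
		free(buf);
		return 0;
	}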

Signed-off-by: Li Zhijian <lizhijian@...itsu.com>
---
v2: update commit message, get rid of confusing ib_check_flush_access_flags() # Tom
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 9e3e1a18f2dd..24ca014cdecd 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -113,6 +113,13 @@ void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr)
 	mr->type = IB_MR_TYPE_DMA;
 }
 
+static bool vaddr_in_pmem(char *vaddr)
+{
+	return REGION_INTERSECTS ==
+	       region_intersects(virt_to_phys(vaddr), 1, IORESOURCE_MEM,
+				 IORES_DESC_PERSISTENT_MEMORY);
+}
+
 int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr)
 {
@@ -123,6 +130,7 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 	int			num_buf;
 	void			*vaddr;
 	int err;
+	bool first = true, is_pmem = false;
 	int i;
 
 	umem = ib_umem_get(pd->ibpd.device, start, length, access);
@@ -167,6 +175,11 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 				goto err_cleanup_map;
 			}
 
+			if (first) {
+				first = false;
+				is_pmem = vaddr_in_pmem(vaddr);
+			}
+
 			buf->addr = (uintptr_t)vaddr;
 			buf->size = PAGE_SIZE;
 			num_buf++;
@@ -175,6 +188,12 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 		}
 	}
 
+	if (!is_pmem && access & IB_ACCESS_FLUSH_PERSISTENT) {
+		pr_warn("Cannot register IB_ACCESS_FLUSH_PERSISTENT for non-pmem memory\n");
+		err = -EINVAL;
+		goto err_release_umem;
+	}
+
 	mr->ibmr.pd = &pd->ibpd;
 	mr->umem = umem;
 	mr->access = access;
-- 
2.31.1
