Message-Id: <20210622165622.2638628-1-ira.weiny@intel.com>
Date: Tue, 22 Jun 2021 09:56:22 -0700
From: ira.weiny@...el.com
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Ira Weiny <ira.weiny@...el.com>,
Mike Marciniszyn <mike.marciniszyn@...nelisnetworks.com>,
Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
Doug Ledford <dledford@...hat.com>,
Faisal Latif <faisal.latif@...el.com>,
Shiraz Saleem <shiraz.saleem@...el.com>,
Bernard Metzler <bmt@...ich.ibm.com>,
Kamal Heib <kheib@...hat.com>, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH V2] RDMA/irdma: Remove use of kmap()
From: Ira Weiny <ira.weiny@...el.com>
kmap() is being deprecated and will break uses of device dax after PKS
protection is introduced.[1]

The kmap() used in the irdma CM driver is thread local.  Therefore
kmap_local_page() is sufficient and may also provide a performance
benefit.  kmap_local_page() works with device dax and pgmap protected
pages.

Use kmap_local_page() instead of kmap().

[1] https://lore.kernel.org/lkml/20201009195033.3208459-59-ira.weiny@intel.com/
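
For reference, a minimal sketch (not part of the patch) of the
kmap_local_page()/kunmap_local() pairing this change moves to.  The
helper name and the memcpy() are hypothetical and only illustrate that
the mapping is per-thread and must be released with the address
returned by kmap_local_page():

#include <linux/highmem.h>	/* kmap_local_page(), kunmap_local() */
#include <linux/string.h>	/* memcpy() */

/* Hypothetical helper, for illustration only. */
static void example_copy_to_page(struct page *page, const void *src,
				 size_t len)
{
	void *kaddr;

	kaddr = kmap_local_page(page);	/* CPU/thread-local mapping */
	memcpy(kaddr, src, len);	/* access through the temporary map */
	kunmap_local(kaddr);		/* unmap using the returned address */
}

Nested local mappings must be released in the reverse order in which
they were taken.
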
Signed-off-by: Ira Weiny <ira.weiny@...el.com>
---
Changes for V2:
Move to the new irdma driver for 5.14
---
drivers/infiniband/hw/irdma/cm.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
index 3d2bdb033a54..6b62299abfbb 100644
--- a/drivers/infiniband/hw/irdma/cm.c
+++ b/drivers/infiniband/hw/irdma/cm.c
@@ -3675,14 +3675,14 @@ int irdma_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	ibmr->device = iwpd->ibpd.device;
 	iwqp->lsmm_mr = ibmr;
 	if (iwqp->page)
-		iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+		iwqp->sc_qp.qp_uk.sq_base = kmap_local_page(iwqp->page);
 
 	cm_node->lsmm_size = accept.size + conn_param->private_data_len;
 	irdma_sc_send_lsmm(&iwqp->sc_qp, iwqp->ietf_mem.va, cm_node->lsmm_size,
 			   ibmr->lkey);
 
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_local(iwqp->sc_qp.qp_uk.sq_base);
 
 	iwqp->cm_id = cm_id;
 	cm_node->cm_id = cm_id;
@@ -4093,10 +4093,10 @@ static void irdma_cm_event_connected(struct irdma_cm_event *event)
 	irdma_cm_init_tsa_conn(iwqp, cm_node);
 	read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO);
 	if (iwqp->page)
-		iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+		iwqp->sc_qp.qp_uk.sq_base = kmap_local_page(iwqp->page);
 	irdma_sc_send_rtt(&iwqp->sc_qp, read0);
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_local(iwqp->sc_qp.qp_uk.sq_base);
 
 	attr.qp_state = IB_QPS_RTS;
 	cm_node->qhash_set = false;
--
2.28.0.rc0.12.gb6a658bd00c9