Message-ID: <20200517235620.205225-3-jhubbard@nvidia.com>
Date: Sun, 17 May 2020 16:56:20 -0700
From: John Hubbard <jhubbard@...dia.com>
To: LKML <linux-kernel@...r.kernel.org>
CC: John Hubbard <jhubbard@...dia.com>,
Matt Porter <mporter@...nel.crashing.org>,
Alexandre Bounine <alex.bou9@...il.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Dan Carpenter <dan.carpenter@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<linux-media@...r.kernel.org>
Subject: [PATCH 2/2] rapidio: convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages_fast() in a "Case 2" scenario
(DMA/RDMA), per the categorization in [1]. That means it's time to
convert the get_user_pages_fast() + put_page() calls to
pin_user_pages_fast() + unpin_user_pages() calls.

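For reference, here is the conversion pattern in isolation, as a
minimal sketch rather than rapidio code: the helper names
my_pin_for_dma() and my_unpin_after_dma() are made up for
illustration, while pin_user_pages_fast() and unpin_user_pages() are
the real APIs this patch switches to.

	/*
	 * Illustrative sketch only, not rapidio code. The helpers are
	 * hypothetical; only the pin/unpin calls match this patch.
	 */
	#include <linux/errno.h>
	#include <linux/mm.h>

	static int my_pin_for_dma(unsigned long uaddr, int nr_pages,
				  struct page **pages, bool is_write)
	{
		int pinned;

		/* Was: get_user_pages_fast(...). Now: */
		pinned = pin_user_pages_fast(uaddr & PAGE_MASK, nr_pages,
					     is_write ? FOLL_WRITE : 0,
					     pages);
		if (pinned < 0)
			return pinned;	/* no pages were pinned */
		if (pinned != nr_pages) {
			/* Partial pin: release what we got, then fail. */
			unpin_user_pages(pages, pinned);
			return -EFAULT;
		}
		return 0;
	}

	static void my_unpin_after_dma(struct page **pages,
				       unsigned long nr_pages)
	{
		/* Was: a loop calling put_page() on each page. Now: */
		unpin_user_pages(pages, nr_pages);
	}
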
There is some helpful background in [2]: this is a small part of
fixing a long-standing disconnect between pinning pages and file
systems' use of those pages; a short sketch of what that enables
follows the references below.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/
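
As a concrete illustration of what pinning (as opposed to merely
taking a page reference) enables: pages pinned via the FOLL_PIN-based
pin_user_pages*() APIs become detectable to other subsystems. The
sketch below is hypothetical usage and not part of this patch;
page_maybe_dma_pinned() is the real helper, and it may return false
positives on pages with very high refcounts.

	/* Illustrative sketch only; my_page_is_dma_busy() is made up. */
	#include <linux/mm.h>

	static bool my_page_is_dma_busy(struct page *page)
	{
		/*
		 * True if the page may be DMA-pinned via FOLL_PIN.
		 * Callers must tolerate occasional false positives.
		 */
		return page_maybe_dma_pinned(page);
	}
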
Cc: Matt Porter <mporter@...nel.crashing.org>
Cc: Alexandre Bounine <alex.bou9@...il.com>
Cc: Sumit Semwal <sumit.semwal@...aro.org>
Cc: Dan Carpenter <dan.carpenter@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-media@...r.kernel.org
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
 drivers/rapidio/devices/rio_mport_cdev.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index 10af330153b5..0ddd94d6f1e9 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -572,14 +572,12 @@ static void dma_req_free(struct kref *ref)
 	struct mport_dma_req *req = container_of(ref, struct mport_dma_req,
 			refcount);
 	struct mport_cdev_priv *priv = req->priv;
-	unsigned int i;
 
 	dma_unmap_sg(req->dmach->device->dev,
 		     req->sgt.sgl, req->sgt.nents, req->dir);
 	sg_free_table(&req->sgt);
 	if (req->page_list) {
-		for (i = 0; i < req->nr_pages; i++)
-			put_page(req->page_list[i]);
+		unpin_user_pages(req->page_list, req->nr_pages);
 		kfree(req->page_list);
 	}
 
@@ -815,7 +813,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 	struct mport_dma_req *req;
 	struct mport_dev *md = priv->md;
 	struct dma_chan *chan;
-	int i, ret;
+	int ret;
 	int nents;
 
 	if (xfer->length == 0)
@@ -862,7 +860,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 			goto err_req;
 		}
 
-		pinned = get_user_pages_fast(
+		pinned = pin_user_pages_fast(
 				(unsigned long)xfer->loc_addr & PAGE_MASK,
 				nr_pages,
 				dir == DMA_FROM_DEVICE ? FOLL_WRITE : 0,
@@ -870,7 +868,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 
 		if (pinned != nr_pages) {
 			if (pinned < 0) {
-				rmcd_error("get_user_pages_unlocked err=%ld",
+				rmcd_error("pin_user_pages_fast err=%ld",
 					   pinned);
 				nr_pages = 0;
 			} else
@@ -951,8 +949,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 
 err_pg:
 	if (!req->page_list) {
-		for (i = 0; i < nr_pages; i++)
-			put_page(page_list[i]);
+		unpin_user_pages(page_list, nr_pages);
 		kfree(page_list);
 	}
 err_req:
--
2.26.2