Message-ID: <20200519051850.2845561-1-jhubbard@nvidia.com>
Date: Mon, 18 May 2020 22:18:50 -0700
From: John Hubbard <jhubbard@...dia.com>
To: LKML <linux-kernel@...r.kernel.org>
CC: John Hubbard <jhubbard@...dia.com>,
Jens Wiklander <jens.wiklander@...aro.org>,
Sumit Semwal <sumit.semwal@...aro.org>,
<tee-dev@...ts.linaro.org>, <linux-media@...r.kernel.org>,
<dri-devel@...ts.freedesktop.org>, <linaro-mm-sig@...ts.linaro.org>
Subject: [PATCH] tee: convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.
[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/

Cc: Jens Wiklander <jens.wiklander@...aro.org>
Cc: Sumit Semwal <sumit.semwal@...aro.org>
Cc: tee-dev@...ts.linaro.org
Cc: linux-media@...r.kernel.org
Cc: dri-devel@...ts.freedesktop.org
Cc: linaro-mm-sig@...ts.linaro.org
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
Note that I have only compile-tested this patch, although that does
also include cross-compiling for a few other arches.
thanks,
John Hubbard
NVIDIA
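
For reviewers less familiar with the gup/pup API split, here is a
minimal sketch of the conversion pattern this patch follows. It is
illustrative only: example_pin_buffer() and example_unpin_buffer() are
hypothetical names, not functions in this driver.

#include <linux/mm.h>

/*
 * Hypothetical helpers showing the "Case 2" (DMA/RDMA) conversion:
 * pages pinned via pin_user_pages_fast() must later be released with
 * unpin_user_pages(), not with put_page().
 */
static int example_pin_buffer(unsigned long start, int num_pages,
			      struct page **pages)
{
	/* Before: return get_user_pages_fast(start, num_pages, FOLL_WRITE, pages); */
	return pin_user_pages_fast(start, num_pages, FOLL_WRITE, pages);
}

static void example_unpin_buffer(struct page **pages, unsigned long num_pages)
{
	/* Before: a put_page() loop. After: one call balances every pin. */
	unpin_user_pages(pages, num_pages);
}

The pin_*() variants take FOLL_PIN references that are tracked
separately from ordinary page references, so they must be paired with
unpin_user_pages*() rather than put_page().
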
drivers/tee/tee_shm.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index bd679b72bd05..7dffc42d8d5a 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -31,16 +31,13 @@ static void tee_shm_release(struct tee_shm *shm)
poolm->ops->free(poolm, shm);
} else if (shm->flags & TEE_SHM_REGISTER) {
- size_t n;
int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
if (rc)
dev_err(teedev->dev.parent,
"unregister shm %p failed: %d", shm, rc);
- for (n = 0; n < shm->num_pages; n++)
- put_page(shm->pages[n]);
-
+ unpin_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
@@ -226,7 +223,7 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
goto err;
}
- rc = get_user_pages_fast(start, num_pages, FOLL_WRITE, shm->pages);
+ rc = pin_user_pages_fast(start, num_pages, FOLL_WRITE, shm->pages);
if (rc > 0)
shm->num_pages = rc;
if (rc != num_pages) {
@@ -271,16 +268,13 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
return shm;
err:
if (shm) {
- size_t n;
-
if (shm->id >= 0) {
mutex_lock(&teedev->mutex);
idr_remove(&teedev->idr, shm->id);
mutex_unlock(&teedev->mutex);
}
if (shm->pages) {
- for (n = 0; n < shm->num_pages; n++)
- put_page(shm->pages[n]);
+ unpin_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
}
--
2.26.2