Message-ID: <15223dd3-8018-65f0-dc0b-aef43945e54e@nvidia.com>
Date: Mon, 22 Jul 2019 16:06:09 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Bharath Vedartham <linux.bhar@...il.com>
CC: <arnd@...db.de>, <sivanich@....com>, <gregkh@...uxfoundation.org>,
<ira.weiny@...el.com>, <jglisse@...hat.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH 3/3] sgi-gru: Use __get_user_pages_fast in
atomic_pte_lookup

On 7/22/19 10:53 AM, Bharath Vedartham wrote:
> On Sun, Jul 21, 2019 at 07:32:36PM -0700, John Hubbard wrote:
>> On 7/21/19 8:58 AM, Bharath Vedartham wrote:
...
>> Also, optional: as long as you're there, atomic_pte_lookup() ought to
>> either return a bool (true == success) or an errno, rather than a
>> numeric zero or one.
> That makes sense. But the code which uses atomic_pte_lookup uses the
> return value of 1 for success and failure value of 0 in gru_vtop. That's
> why I did not mess with the return values in this code. It would require
> some change in the driver functionality which I am not ready to do :(
It's a static function with only one caller. You could just merge in
something like this, on top of what you have:
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index 121c9a4ccb94..2f768fc06432 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -189,10 +189,11 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 	return 0;
 }
 
-/*
- * atomic_pte_lookup
+/**
+ * atomic_pte_lookup() - Convert a user virtual address to a physical address
+ * Return: true for success, false for failure. Failure means that the page
+ * could not be pinned via gup fast.
  *
- * Convert a user virtual address to a physical address
  * Only supports Intel large pages (2MB only) on x86_64.
  * ZZZ - hugepage support is incomplete
  *
@@ -207,12 +208,12 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
 
 	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;
 
 	if (!__get_user_pages_fast(vaddr, 1, write, &page))
-		return 1;
+		return false;
 
 	*paddr = page_to_phys(page);
 	put_user_page(page);
-	return 0;
+	return true;
 }
 
 static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
@@ -221,7 +222,8 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
 	struct mm_struct *mm = gts->ts_mm;
 	struct vm_area_struct *vma;
 	unsigned long paddr;
-	int ret, ps;
+	int ps;
+	bool success;
 
 	vma = find_vma(mm, vaddr);
 	if (!vma)
@@ -232,8 +234,8 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
 	 * context.
 	 */
 	rmb();	/* Must/check ms_range_active before loading PTEs */
-	ret = atomic_pte_lookup(vma, vaddr, write, &paddr, &ps);
-	if (ret) {
+	success = atomic_pte_lookup(vma, vaddr, write, &paddr, &ps);
+	if (!success) {
 		if (atomic)
 			goto upm;
 		if (non_atomic_pte_lookup(vma, vaddr, write, &paddr, &ps))

thanks,
--
John Hubbard
NVIDIA