Message-Id: <1563968476-12785-4-git-send-email-linux.bhar@gmail.com>
Date:   Wed, 24 Jul 2019 17:11:16 +0530
From:   Bharath Vedartham <linux.bhar@...il.com>
To:     sivanich@....com, arnd@...db.de, jhubbard@...dia.com
Cc:     ira.weiny@...el.com, jglisse@...hat.com,
        gregkh@...uxfoundation.org, william.kucharski@...cle.com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Bharath Vedartham <linux.bhar@...il.com>
Subject: [PATCH v2 3/3] sgi-gru: Use __get_user_pages_fast in atomic_pte_lookup

The *pte_lookup functions get the physical address for a given virtual
address by pinning the backing physical page with gup and then using
page_to_phys to obtain the physical address.
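
For illustration, the pattern both variants follow boils down to the
sketch below (simplified, not the driver code verbatim; error handling
omitted):

	struct page *page;

	/* Pin the single page backing vaddr via gup (may sleep). */
	if (get_user_pages(vaddr, 1, write ? FOLL_WRITE : 0, &page, NULL) != 1)
		return -EFAULT;

	*paddr = page_to_phys(page);	/* struct page -> physical address */
	put_user_page(page);		/* release the pin taken by gup */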

Currently, atomic_pte_lookup manually walks the page tables. If this
walk fails to produce a physical page, the caller falls back to
non_atomic_pte_lookup, which obtains the page via the slow gup path.

Instead of manually walking the page tables, use __get_user_pages_fast,
which performs the same lookup and does not itself fall back to the
slow gup path.
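
__get_user_pages_fast does not take mmap_sem and never sleeps; it
returns the number of pages it managed to pin. The intended use here
is roughly the following sketch:

	struct page *page;

	/* Non-sleeping pin attempt; safe in atomic context. */
	if (__get_user_pages_fast(vaddr, 1, write, &page) != 1)
		return false;	/* caller may retry via the slow path */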

Also, change atomic_pte_lookup's return value to a boolean and adjust
the callsites accordingly.

This is largely inspired by KVM, which uses __get_user_pages_fast in
hva_to_pfn_fast, a function that can run in atomic context.
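
Paraphrasing that helper as a sketch (simplified; the _sketch name is
illustrative only, and the details vary across kernel versions):

	static bool hva_to_pfn_fast_sketch(unsigned long addr, kvm_pfn_t *pfn)
	{
		struct page *page[1];

		/* Try a non-sleeping pin; fine in atomic context. */
		if (__get_user_pages_fast(addr, 1, 1, page) == 1) {
			*pfn = page_to_pfn(page[0]);
			return true;
		}
		return false;	/* fall back to the slow hva_to_pfn path */
	}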

Cc: Ira Weiny <ira.weiny@...el.com>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: Jérôme Glisse <jglisse@...hat.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Dimitri Sivanich <sivanich@....com>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: William Kucharski <william.kucharski@...cle.com>
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
Reviewed-by: Ira Weiny <ira.weiny@...el.com>
Signed-off-by: Bharath Vedartham <linux.bhar@...il.com>
---
Changes since v1
	- Modified the return value of atomic_pte_lookup
	to use booleans rather than numeric values.
	This was suggested by John Hubbard.
---
 drivers/misc/sgi-gru/grufault.c | 56 +++++++++++------------------------------
 1 file changed, 15 insertions(+), 41 deletions(-)

diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index bce47af..da2d2cc 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -193,9 +193,11 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 }
 
 /*
- * atomic_pte_lookup
+ * atomic_pte_lookup() - Convert a user virtual address
+ * to a physical address.
+ * Return: true for success, false for failure. Failure means that
+ * the page could not be pinned via gup fast.
  *
- * Convert a user virtual address to a physical address
  * Only supports Intel large pages (2MB only) on x86_64.
  *	ZZZ - hugepage support is incomplete
  *
@@ -205,49 +207,20 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
 static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
 	int write, unsigned long *paddr, int *pageshift)
 {
-	pgd_t *pgdp;
-	p4d_t *p4dp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t pte;
-
-	pgdp = pgd_offset(vma->vm_mm, vaddr);
-	if (unlikely(pgd_none(*pgdp)))
-		goto err;
-
-	p4dp = p4d_offset(pgdp, vaddr);
-	if (unlikely(p4d_none(*p4dp)))
-		goto err;
-
-	pudp = pud_offset(p4dp, vaddr);
-	if (unlikely(pud_none(*pudp)))
-		goto err;
-
-	pmdp = pmd_offset(pudp, vaddr);
-	if (unlikely(pmd_none(*pmdp)))
-		goto err;
-#ifdef CONFIG_X86_64
-	if (unlikely(pmd_large(*pmdp)))
-		pte = *(pte_t *) pmdp;
-	else
-#endif
-		pte = *pte_offset_kernel(pmdp, vaddr);
-
-	if (unlikely(!pte_present(pte) ||
-		     (write && (!pte_write(pte) || !pte_dirty(pte)))))
-		return 1;
-
-	*paddr = pte_pfn(pte) << PAGE_SHIFT;
+	struct page *page;
 
 	if (unlikely(is_vm_hugetlb_page(vma)))
 		*pageshift = HPAGE_SHIFT;
 	else
 		*pageshift = PAGE_SHIFT;
 
-	return 0;
+	if (!__get_user_pages_fast(vaddr, 1, write, &page))
+		return false;
 
-err:
-	return 1;
+	*paddr = page_to_phys(page);
+	put_user_page(page);
+
+	return true;
 }
 
 static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
@@ -256,7 +229,8 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
 	struct mm_struct *mm = gts->ts_mm;
 	struct vm_area_struct *vma;
 	unsigned long paddr;
-	int ret, ps;
+	int ps;
+	bool success;
 
 	vma = find_vma(mm, vaddr);
 	if (!vma)
@@ -267,8 +241,8 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
 	 * context.
 	 */
 	rmb();	/* Must/check ms_range_active before loading PTEs */
-	ret = atomic_pte_lookup(vma, vaddr, write, &paddr, &ps);
-	if (ret) {
+	success = atomic_pte_lookup(vma, vaddr, write, &paddr, &ps);
+	if (!success) {
 		if (atomic)
 			goto upm;
 		if (non_atomic_pte_lookup(vma, vaddr, write, &paddr, &ps))
-- 
2.7.4
