Message-ID: <c508330d-a5d0-fba3-9dd0-eb820a96ee09@nvidia.com>
Date:   Sun, 21 Jul 2019 19:32:36 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Bharath Vedartham <linux.bhar@...il.com>, <arnd@...db.de>,
        <sivanich@....com>, <gregkh@...uxfoundation.org>
CC:     <ira.weiny@...el.com>, <jglisse@...hat.com>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH 3/3] sgi-gru: Use __get_user_pages_fast in
 atomic_pte_lookup

On 7/21/19 8:58 AM, Bharath Vedartham wrote:
> The *pte_lookup functions get the physical address for a given virtual
> address by getting the physical page with gup and using page_to_phys
> to get the physical address.
> 
> Currently, atomic_pte_lookup manually walks the page tables. If this
> function fails to get a physical page, it falls back to
> non_atomic_pte_lookup, which uses the slow gup path to get the
> physical page.
> 
> Instead of manually walking the page tables, use __get_user_pages_fast,
> which does the same thing without falling back to the slow gup path.
> 
> This is largely inspired by kvm code. kvm uses __get_user_pages_fast
> in its hva_to_pfn_fast function, which can run in an atomic context.
> 
> Cc: Ira Weiny <ira.weiny@...el.com>
> Cc: John Hubbard <jhubbard@...dia.com>
> Cc: Jérôme Glisse <jglisse@...hat.com>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Dimitri Sivanich <sivanich@....com>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> Signed-off-by: Bharath Vedartham <linux.bhar@...il.com>
> ---
>  drivers/misc/sgi-gru/grufault.c | 39 +++++----------------------------------
>  1 file changed, 5 insertions(+), 34 deletions(-)
> 
> diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
> index 75108d2..121c9a4 100644
> --- a/drivers/misc/sgi-gru/grufault.c
> +++ b/drivers/misc/sgi-gru/grufault.c
> @@ -202,46 +202,17 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
>  static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
>  	int write, unsigned long *paddr, int *pageshift)
>  {
> -	pgd_t *pgdp;
> -	p4d_t *p4dp;
> -	pud_t *pudp;
> -	pmd_t *pmdp;
> -	pte_t pte;
> -
> -	pgdp = pgd_offset(vma->vm_mm, vaddr);
> -	if (unlikely(pgd_none(*pgdp)))
> -		goto err;
> -
> -	p4dp = p4d_offset(pgdp, vaddr);
> -	if (unlikely(p4d_none(*p4dp)))
> -		goto err;
> -
> -	pudp = pud_offset(p4dp, vaddr);
> -	if (unlikely(pud_none(*pudp)))
> -		goto err;
> +	struct page *page;
>  
> -	pmdp = pmd_offset(pudp, vaddr);
> -	if (unlikely(pmd_none(*pmdp)))
> -		goto err;
> -#ifdef CONFIG_X86_64
> -	if (unlikely(pmd_large(*pmdp)))
> -		pte = *(pte_t *) pmdp;
> -	else
> -#endif
> -		pte = *pte_offset_kernel(pmdp, vaddr);
> +	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;
>  
> -	if (unlikely(!pte_present(pte) ||
> -		     (write && (!pte_write(pte) || !pte_dirty(pte)))))
> +	if (!__get_user_pages_fast(vaddr, 1, write, &page))
>  		return 1;

Let's please use a numeric, not boolean, comparison for the return
value of gup.
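
Something like this, for example (untested, and keeping the current
0/1 return convention for now; __get_user_pages_fast() returns the
number of pages it actually pinned, so comparing against the requested
count reads better):

	if (__get_user_pages_fast(vaddr, 1, write, &page) != 1)
		return 1;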

Also, optional: as long as you're there, atomic_pte_lookup() ought to
either return a bool (true == success) or an errno, rather than a
numeric zero or one.
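
For the errno variant, a rough sketch (untested, and the caller in
gru_vtop() would need a matching adjustment):

	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;

	if (__get_user_pages_fast(vaddr, 1, write, &page) != 1)
		return -EFAULT;

	*paddr = page_to_phys(page);
	put_user_page(page);

	return 0;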

Other than that, this looks like a good cleanup. I wonder how many
open-coded gup implementations like this are floating around.

thanks,
-- 
John Hubbard
NVIDIA

>  
> -	*paddr = pte_pfn(pte) << PAGE_SHIFT;
> -
> -	*pageshift = is_vm_hugetlb_page(vma) ? HPAGE_SHIFT : PAGE_SHIFT;
> +	*paddr = page_to_phys(page);
> +	put_user_page(page);
>  
>  	return 0;
> -
> -err:
> -	return 1;
>  }
>  
>  static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
> 
