Date:   Wed, 10 Feb 2021 16:33:07 -0800
From:   Ira Weiny <ira.weiny@...el.com>
To:     Prathu Baronia <prathubaronia2011@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-kernel@...r.kernel.org, chintan.pandya@...plus.com,
        Prathu Baronia <prathu.baronia@...plus.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v4 1/1] mm/highmem: Remove deprecated kmap_atomic

On Thu, Feb 04, 2021 at 01:02:53PM +0530, Prathu Baronia wrote:
> From: Ira Weiny <ira.weiny@...el.com>
> 
> kmap_atomic() is being deprecated in favor of kmap_local_page().
> 
> Replace the uses of kmap_atomic() within the highmem code.
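> 
> As a minimal sketch of the conversion pattern (copy_one_page() below is a
> hypothetical helper, not part of this patch): kmap_local_page() does not
> disable pagefaults or preemption the way kmap_atomic() does, but the
> mappings are still stack based, so they must be released in reverse order:
> 
>       #include <linux/highmem.h>
>       #include <linux/string.h>
> 
>       static void copy_one_page(struct page *dst, struct page *src)
>       {
>               void *vsrc = kmap_local_page(src);
>               void *vdst = kmap_local_page(dst);
> 
>               memcpy(vdst, vsrc, PAGE_SIZE);
> 
>               /* Unmap in reverse (LIFO) order of mapping. */
>               kunmap_local(vdst);
>               kunmap_local(vsrc);
>       }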
> 
> Profiling clear_huge_page() with ftrace on the setup below showed a
> 62% improvement.
> 
> Setup:-
> The data below was collected on Qualcomm's SM7250 SoC with THP enabled
> (kernel v4.19.113), with only CPU-0 (Cortex-A55) and CPU-7 (Cortex-A76)
> switched on and set to maximum frequency, and with DDR set to the
> performance governor.
> 
> FTRACE Data:-
> 
> Base data:-
> Number of iterations: 48
> Mean of allocation time: 349.5 us
> std deviation: 74.5 us
> 
> v4 data:-
> Number of iterations: 48
> Mean of allocation time: 131 us
> std deviation: 32.7 us
> 
> The following simple userspace experiment, which allocates 100 MB (BUF_SZ)
> of pages and writes to them, gave us good insight: we observed a 42%
> improvement in allocation and write timings.
> -------------------------------------------------------------
> Test code snippet
> -------------------------------------------------------------
>       clock_start();
>       buf = malloc(BUF_SZ); /* Allocate 100 MB of memory */
> 
>       for (i = 0; i < BUF_SZ_PAGES; i++) {
>               *((int *)(buf + (i * PAGE_SIZE))) = 1;
>       }
>       clock_end();
> -------------------------------------------------------------
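> 
> A self-contained version of the snippet (a sketch only; clock_start()/
> clock_end() are assumed to be clock_gettime(CLOCK_MONOTONIC) wrappers,
> and the page size is taken from sysconf()):
> 
>       #include <stdio.h>
>       #include <stdlib.h>
>       #include <time.h>
>       #include <unistd.h>
> 
>       #define BUF_SZ (100UL * 1024 * 1024)    /* 100 MB */
> 
>       int main(void)
>       {
>               long page_size = sysconf(_SC_PAGESIZE);
>               long nr_pages = BUF_SZ / page_size;
>               struct timespec start, end;
>               char *buf;
>               long i;
> 
>               clock_gettime(CLOCK_MONOTONIC, &start);
> 
>               buf = malloc(BUF_SZ);           /* Allocate 100 MB of memory */
>               if (!buf)
>                       return 1;
> 
>               /* Touch the first word of every page to fault it in. */
>               for (i = 0; i < nr_pages; i++)
>                       *(int *)(buf + i * page_size) = 1;
> 
>               clock_gettime(CLOCK_MONOTONIC, &end);
> 
>               printf("alloc+write: %ld us\n",
>                      (end.tv_sec - start.tv_sec) * 1000000L +
>                      (end.tv_nsec - start.tv_nsec) / 1000L);
> 
>               free(buf);
>               return 0;
>       }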
> 
> Malloc test timings for 100MB anon allocation:-
> 
> Base data:-
> Number of iterations: 100
> Mean of allocation time: 31831 us
> std deviation: 4286 us
> 
> v4 data:-
> Number of iterations: 100
> Mean of allocation time: 18193 us
> std deviation: 4915 us
> 
> Signed-off-by: Ira Weiny <ira.weiny@...el.com>

This already has my Signed-off-by, so I'm not going to 'review' it.  With
Prathu's testing information I hope this can land.

Andrew, did you see this patch?

Thanks,
Ira

> Signed-off-by: Prathu Baronia <prathu.baronia@...plus.com>
> [Updated commit text with test data]
> ---
>  include/linux/highmem.h | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index d2c70d3772a3..9a202c7e4e26 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -146,9 +146,9 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
>  #ifndef clear_user_highpage
>  static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
>  {
> -	void *addr = kmap_atomic(page);
> +	void *addr = kmap_local_page(page);
>  	clear_user_page(addr, vaddr, page);
> -	kunmap_atomic(addr);
> +	kunmap_local(addr);
>  }
>  #endif
>  
> @@ -199,9 +199,9 @@ alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
>  
>  static inline void clear_highpage(struct page *page)
>  {
> -	void *kaddr = kmap_atomic(page);
> +	void *kaddr = kmap_local_page(page);
>  	clear_page(kaddr);
> -	kunmap_atomic(kaddr);
> +	kunmap_local(kaddr);
>  }
>  
>  /*
> @@ -216,7 +216,7 @@ static inline void zero_user_segments(struct page *page,
>  		unsigned start1, unsigned end1,
>  		unsigned start2, unsigned end2)
>  {
> -	void *kaddr = kmap_atomic(page);
> +	void *kaddr = kmap_local_page(page);
>  	unsigned int i;
>  
>  	BUG_ON(end1 > page_size(page) || end2 > page_size(page));
> @@ -227,7 +227,7 @@ static inline void zero_user_segments(struct page *page,
>  	if (end2 > start2)
>  		memset(kaddr + start2, 0, end2 - start2);
>  
> -	kunmap_atomic(kaddr);
> +	kunmap_local(kaddr);
>  	for (i = 0; i < compound_nr(page); i++)
>  		flush_dcache_page(page + i);
>  }
> @@ -252,11 +252,11 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
>  {
>  	char *vfrom, *vto;
>  
> -	vfrom = kmap_atomic(from);
> -	vto = kmap_atomic(to);
> +	vfrom = kmap_local_page(from);
> +	vto = kmap_local_page(to);
>  	copy_user_page(vto, vfrom, vaddr, to);
> -	kunmap_atomic(vto);
> -	kunmap_atomic(vfrom);
> +	kunmap_local(vto);
> +	kunmap_local(vfrom);
>  }
>  
>  #endif
> @@ -267,11 +267,11 @@ static inline void copy_highpage(struct page *to, struct page *from)
>  {
>  	char *vfrom, *vto;
>  
> -	vfrom = kmap_atomic(from);
> -	vto = kmap_atomic(to);
> +	vfrom = kmap_local_page(from);
> +	vto = kmap_local_page(to);
>  	copy_page(vto, vfrom);
> -	kunmap_atomic(vto);
> -	kunmap_atomic(vfrom);
> +	kunmap_local(vto);
> +	kunmap_local(vfrom);
>  }
>  
>  #endif
> -- 
> 2.17.1
> 
