Date:   Thu, 24 Aug 2017 13:05:15 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     js1304@...il.com, Andrew Morton <akpm@...ux-foundation.org>
Cc:     Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>,
        Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH] mm/mlock: use page_zone() instead of page_zone_id()

+CC Mel

On 08/24/2017 09:20 AM, js1304@...il.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>
> 
> page_zone_id() is a specialized function to compare the zone for the pages
> that are within the section range. If the sections of the pages are
> different, page_zone_id() can be different even if their zone is the same.
> This wrong usage doesn't cause any actual problem since
> __munlock_pagevec_fill() would be called again with the failed index.
> However, it's better to use a more appropriate function here.

Hmm, using the zone id was part of the series that made munlock faster. Too
bad it's doing the wrong thing on some memory models. It looks like it wasn't
evaluated in isolation, but only as part of the pagevec usage (commit
7a8010cd36273), though most likely it wasn't contributing much to the
14% speedup.

> This patch is also preparation for a further change to page_zone_id().

Out of curiosity, what kind of change?

Vlastimil

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> ---
>  mm/mlock.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/mlock.c b/mm/mlock.c
> index b562b55..dfc6f19 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -365,8 +365,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>   * @start + PAGE_SIZE when no page could be added by the pte walk.
>   */
>  static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
> -		struct vm_area_struct *vma, int zoneid,	unsigned long start,
> -		unsigned long end)
> +			struct vm_area_struct *vma, struct zone *zone,
> +			unsigned long start, unsigned long end)
>  {
>  	pte_t *pte;
>  	spinlock_t *ptl;
> @@ -394,7 +394,7 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>  		 * Break if page could not be obtained or the page's node+zone does not
>  		 * match
>  		 */
> -		if (!page || page_zone_id(page) != zoneid)
> +		if (!page || page_zone(page) != zone)
>  			break;
>  
>  		/*
> @@ -446,7 +446,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  		unsigned long page_increm;
>  		struct pagevec pvec;
>  		struct zone *zone;
> -		int zoneid;
>  
>  		pagevec_init(&pvec, 0);
>  		/*
> @@ -481,7 +480,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  				 */
>  				pagevec_add(&pvec, page);
>  				zone = page_zone(page);
> -				zoneid = page_zone_id(page);
>  
>  				/*
>  				 * Try to fill the rest of pagevec using fast
> @@ -490,7 +488,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  				 * pagevec.
>  				 */
>  				start = __munlock_pagevec_fill(&pvec, vma,
> -						zoneid, start, end);
> +						zone, start, end);
>  				__munlock_pagevec(&pvec, zone);
>  				goto next;
>  			}
> 
