Message-ID: <006330a3-8df9-dcd5-8ad6-fc23765a1266@suse.cz>
Date:   Mon, 21 Feb 2022 10:43:30 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Aaron Lu <aaron.lu@...el.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Michal Hocko <mhocko@...nel.org>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 1/1] mm/page_alloc: Do not prefetch buddies during bulk
 free

On 2/21/22 10:41, Mel Gorman wrote:
> free_pcppages_bulk() has taken two passes through the pcp lists since
> commit 0a5f4e5b4562 ("mm/free_pcppages_bulk: do not hold lock when picking
> pages to free") due to deferring the cost of selecting PCP lists until
> the zone lock is held.
> 
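
For context, the two-pass shape that commit introduced looks roughly
like this (a simplified sketch, not the exact code; pick_page() is a
hypothetical stand-in for the pcp list selection, and order-0 pages
are assumed):

	LIST_HEAD(head);
	struct page *page, *next;

	/* Pass 1: pick pages off the pcp lists, zone->lock not held. */
	while (count--)
		list_add_tail(&pick_page(pcp)->lru, &head);

	/* Pass 2: under zone->lock, hand each page to the buddy
	 * allocator, where any merging with buddies takes place. */
	spin_lock(&zone->lock);
	list_for_each_entry_safe(page, next, &head, lru) {
		list_del(&page->lru);
		__free_one_page(page, page_to_pfn(page), zone, 0,
				get_pcppage_migratetype(page), FPI_NONE);
	}
	spin_unlock(&zone->lock);
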
> As the list processing now takes place under the zone lock, it is less
> clear that prefetching is always a benefit, for two reasons.
> 
> 1. There is a guaranteed cost to calculating the buddy location, which
>    has to be calculated again in any case when the page is merged.
>    However, as the zone lock is held and buddy merging is no longer
>    deferred, there is no guarantee that the prefetch will have completed
>    by the time the second buddy calculation takes place and buddies are
>    being merged. With or without the prefetch, there may be further
>    stalls depending on how many pages get merged. In other words, a
>    stall due to merging is inevitable and at best only one stall might
>    be avoided, at the cost of calculating the buddy location twice.
> 
> 2. As the zone lock is held, prefetch_nr makes less sense: by the time
>    prefetch_nr expires, the cache lines of interest have already been
>    merged.
> 
> The main concern is that there is a definite cost to calculating the
> buddy location early for the prefetch, and it is only a "maybe win"
> depending on whether the CPU prefetch logic and the memory are fast
> enough. Remove the prefetch logic on the basis that fewer instructions
> in a path are always a saving, whereas the prefetch might save at most
> one memory stall depending on the CPU and memory.
> 
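
The buddy calculation itself is cheap: in the kernel it is just an XOR
of the PFN. A simplified sketch of the mm/internal.h helper (essentially
what __find_buddy_pfn() does):

	static inline unsigned long
	__find_buddy_pfn(unsigned long page_pfn, unsigned int order)
	{
		/* Buddies of a 2^order block differ only in bit 'order'
		 * of the PFN. */
		return page_pfn ^ (1UL << order);
	}

The prefetch() call itself maps to a hardware prefetch hint (e.g.
__builtin_prefetch()) on most architectures, so the question is purely
whether the hint is issued early enough to hide the later miss.
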
> In most cases the benefit is marginal, as the calculations are a small
> part of the overall cost of freeing pages. However, it was detectable
> on at least one machine.
> 
>                               5.17.0-rc3             5.17.0-rc3
>                     mm-highpcplimit-v2r1     mm-noprefetch-v1r1
> Min       elapsed      630.00 (   0.00%)      610.00 (   3.17%)
> Amean     elapsed      639.00 (   0.00%)      623.00 *   2.50%*
> Max       elapsed      660.00 (   0.00%)      660.00 (   0.00%)
> 
> Suggested-by: Aaron Lu <aaron.lu@...el.com>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>

Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  mm/page_alloc.c | 24 ------------------------
>  1 file changed, 24 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index de9f072d23bd..2d5cc098136d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1432,15 +1432,6 @@ static bool bulkfree_pcp_prepare(struct page *page)
>  }
>  #endif /* CONFIG_DEBUG_VM */
>  
> -static inline void prefetch_buddy(struct page *page, unsigned int order)
> -{
> -	unsigned long pfn = page_to_pfn(page);
> -	unsigned long buddy_pfn = __find_buddy_pfn(pfn, order);
> -	struct page *buddy = page + (buddy_pfn - pfn);
> -
> -	prefetch(buddy);
> -}
> -
>  /*
>   * Frees a number of pages from the PCP lists
>   * Assumes all pages on list are in same zone.
> @@ -1453,7 +1444,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	int min_pindex = 0;
>  	int max_pindex = NR_PCP_LISTS - 1;
>  	unsigned int order;
> -	int prefetch_nr = READ_ONCE(pcp->batch);
>  	bool isolated_pageblocks;
>  	struct page *page;
>  
> @@ -1508,20 +1498,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			if (bulkfree_pcp_prepare(page))
>  				continue;
>  
> -			/*
> -			 * We are going to put the page back to the global
> -			 * pool, prefetch its buddy to speed up later access
> -			 * under zone->lock. It is believed the overhead of
> -			 * an additional test and calculating buddy_pfn here
> -			 * can be offset by reduced memory latency later. To
> -			 * avoid excessive prefetching due to large count, only
> -			 * prefetch buddy for the first pcp->batch nr of pages.
> -			 */
> -			if (prefetch_nr) {
> -				prefetch_buddy(page, order);
> -				prefetch_nr--;
> -			}
> -
>  			/* MIGRATE_ISOLATE page should not go to pcplists */
>  			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
>  			/* Pageblock could have been isolated meanwhile */
