Message-ID: <20200616124357.GG9499@dhcp22.suse.cz>
Date:   Tue, 16 Jun 2020 14:43:57 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        stable@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Minchan Kim <minchan@...nel.org>,
        Huang Ying <ying.huang@...el.com>,
        Wei Yang <richard.weiyang@...il.com>,
        Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH v1 1/3] mm/shuffle: don't move pages between zones and
 don't read garbage memmaps

On Tue 16-06-20 13:52:11, David Hildenbrand wrote:
> Especially with memory hotplug, we can have offline sections (with a
> garbage memmap) and overlapping zones. We have to make sure to only
> touch initialized memmaps (online sections managed by the buddy) and that
> the zone matches, to not move pages between zones.
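(Spelling the two conditions out in one place -- this is only a sketch, and
pick_shuffle_page() is a made-up name used for illustration; the actual change
folds exactly these checks into shuffle_valid_page() in the diff below:)

	static struct page * __meminit pick_shuffle_page(struct zone *zone,
							 unsigned long pfn)
	{
		/*
		 * Offline sections and holes have no initialized memmap;
		 * pfn_to_online_page() returns NULL for them, so a garbage
		 * memmap is never dereferenced.
		 */
		struct page *page = pfn_to_online_page(pfn);

		if (!page)
			return NULL;

		/*
		 * Zone spans can overlap after memory hotplug, so a pfn
		 * inside this zone's span may still belong to a different
		 * zone; never move pages across zones.
		 */
		if (page_zone(page) != zone)
			return NULL;

		return page;
	}
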
> 
> To test if this can actually happen, I added a simple
> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
> onlining the first memory block "online_movable" and the second memory
> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
> and MOVABLE) overlap.
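(With that setup, both pfns can pass the old pfn_valid_within() and
pfn_in_present_section() checks while the pages sit in different zones, so an
assertion placed right before the swap in __shuffle_zone() fires -- a debug
aid only, not part of the patch:)

	/* debug-only check used for the test described above */
	BUG_ON(page_zone(page_i) != page_zone(page_j));
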
> 
> This might result in all kinds of weird situations (e.g., double
> allocations, list corruptions, unmovable allocations ending up in the
> movable zone).
> 
> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
> Cc: stable@...r.kernel.org # v5.2+
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Huang Ying <ying.huang@...el.com>
> Cc: Wei Yang <richard.weiyang@...il.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Signed-off-by: David Hildenbrand <david@...hat.com>

Acked-by: Michal Hocko <mhocko@...e.com>

Thanks!

> ---
>  mm/shuffle.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/shuffle.c b/mm/shuffle.c
> index 44406d9977c77..dd13ab851b3ee 100644
> --- a/mm/shuffle.c
> +++ b/mm/shuffle.c
> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>   * For two pages to be swapped in the shuffle, they must be free (on a
>   * 'free_area' lru), have the same order, and have the same migratetype.
>   */
> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
> +						  unsigned long pfn, int order)
>  {
> -	struct page *page;
> +	struct page *page = pfn_to_online_page(pfn);
>  
>  	/*
>  	 * Given we're dealing with randomly selected pfns in a zone we
>  	 * need to ask questions like...
>  	 */
>  
> -	/* ...is the pfn even in the memmap? */
> -	if (!pfn_valid_within(pfn))
> +	/* ... is the page managed by the buddy? */
> +	if (!page)
>  		return NULL;
>  
> -	/* ...is the pfn in a present section or a hole? */
> -	if (!pfn_in_present_section(pfn))
> +	/* ... is the page assigned to the same zone? */
> +	if (page_zone(page) != zone)
>  		return NULL;
>  
>  	/* ...is the page free and currently on a free_area list? */
> -	page = pfn_to_page(pfn);
>  	if (!PageBuddy(page))
>  		return NULL;
>  
> @@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
>  		 * page_j randomly selected in the span @zone_start_pfn to
>  		 * @spanned_pages.
>  		 */
> -		page_i = shuffle_valid_page(i, order);
> +		page_i = shuffle_valid_page(z, i, order);
>  		if (!page_i)
>  			continue;
>  
> @@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
>  			j = z->zone_start_pfn +
>  				ALIGN_DOWN(get_random_long() % z->spanned_pages,
>  						order_pages);
> -			page_j = shuffle_valid_page(j, order);
> +			page_j = shuffle_valid_page(z, j, order);
>  			if (page_j && page_j != page_i)
>  				break;
>  		}
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs
