Message-Id: <20200928165215.f46924bfff9a109131048f81@linux-foundation.org>
Date:   Mon, 28 Sep 2020 16:52:15 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     js1304@...il.com
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Michal Hocko <mhocko@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
        Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v2 for v5.9] mm/page_alloc: handle a missing case for
 memalloc_nocma_{save/restore} APIs

On Mon, 28 Sep 2020 17:50:46 +0900 js1304@...il.com wrote:

> From: Joonsoo Kim <iamjoonsoo.kim@....com>
> 
> memalloc_nocma_{save/restore} APIs can be used to skip page allocation
> on CMA areas, but there is a missing case: a page on a CMA area can
> still be allocated even when the APIs are used. This patch handles
> that case to fix the potential issue.
> 
> The missing case is an allocation from the pcplist. The MIGRATE_MOVABLE
> pcplist can hold pages from a CMA area, so we need to skip it if
> ALLOC_CMA isn't specified.

The changelog doesn't describe the end-user-visible effects of the bug.
Please send that description?

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3367,9 +3367,16 @@ struct page *rmqueue(struct zone *preferred_zone,
>  	struct page *page;
>  
>  	if (likely(order == 0)) {
> -		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
> +		/*
> +		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
> +		 * we need to skip it when CMA area isn't allowed.
> +		 */
> +		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
> +				migratetype != MIGRATE_MOVABLE) {
> +			page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
>  					migratetype, alloc_flags);
> -		goto out;
> +			goto out;
> +		}
>  	}
>  
>  	/*

We still really don't want to be adding overhead to the page allocation
hotpath for a really obscure feature which has a single callsite.

Do we have an understanding of how many people's kernels are enabling
CONFIG_CMA?

I previously suggested retrying the allocation in
__gup_longterm_locked() but you said "it cannot ensure that we
eventually get the non-CMA page".  Please explain why?
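
The retry I had in mind was roughly the following (untested sketch; the
surrounding locals and the __get_user_pages_locked() arguments are
assumed, as is a FOLL_PIN caller for unpin_user_pages()):

	long i, nr_pinned;

retry:
	nr_pinned = __get_user_pages_locked(mm, start, nr_pages, pages,
					    vmas, NULL, gup_flags);
	if (nr_pinned <= 0)
		return nr_pinned;
	/*
	 * If any page landed in a CMA pageblock, drop the pins and
	 * start over.
	 */
	for (i = 0; i < nr_pinned; i++) {
		if (is_migrate_cma_page(pages[i])) {
			unpin_user_pages(pages, nr_pinned);
			goto retry;
		}
	}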

What about manually emptying the pcplists beforehand? 
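
i.e. something like this before pinning (sketch only;
drain_all_pages(NULL) drains the pcplists of every zone on every CPU):

	unsigned int noncma_flag = memalloc_nocma_save();

	/*
	 * Flush the per-cpu lists so that CMA pages which are already
	 * sitting there cannot be handed out while nocma is in effect.
	 */
	drain_all_pages(NULL);

Though I guess CMA pages freed after the drain could repopulate the
MIGRATE_MOVABLE pcplists, so this narrows the window rather than
closing it?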

Or bypassing the pcplists for this caller and calling __rmqueue() directly?
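
i.e. take zone->lock and pull straight from the buddy lists, something
like (sketch only; the freepage accounting and prep_new_page() work that
rmqueue() normally does are omitted):

	spin_lock_irqsave(&zone->lock, flags);
	/*
	 * __rmqueue() only dips into MIGRATE_CMA when ALLOC_CMA is set,
	 * so clearing it keeps this allocation off the CMA area.
	 */
	page = __rmqueue(zone, order, migratetype,
			 alloc_flags & ~ALLOC_CMA);
	spin_unlock_irqrestore(&zone->lock, flags);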

> @@ -3381,7 +3388,7 @@ struct page *rmqueue(struct zone *preferred_zone,
>  
>  	do {
>  		page = NULL;
> -		if (alloc_flags & ALLOC_HARDER) {
> +		if (order > 0 && alloc_flags & ALLOC_HARDER) {
>  			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
>  			if (page)
>  				trace_mm_page_alloc_zone_locked(page, order, migratetype);

What does this hunk do?
