Date:   Tue, 24 Jan 2017 10:19:11 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Arnd Bergmann <arnd@...db.de>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: ensure alloc_flags in slow path are initialized

On 01/24/2017 12:56 AM, Andrew Morton wrote:
> On Mon, 23 Jan 2017 13:16:12 +0100 Arnd Bergmann <arnd@...db.de> wrote:
> 
>> __alloc_pages_slowpath() has gotten rather complex, and gcc is no longer
>> able to follow the gotos and prove that the alloc_flags variable is
>> initialized by the time it is used:
>>
>> mm/page_alloc.c: In function '__alloc_pages_slowpath':
>> mm/page_alloc.c:3565:15: error: 'alloc_flags' may be used uninitialized in this function [-Werror=maybe-uninitialized]
>>
>> To be honest, I can't figure it out either; maybe it is, maybe it isn't.
>> But moving the existing initialization up a little higher looks safe and
>> makes it obvious to both me and gcc that the initialization comes before
>> the first use.
>>
>> ...
>>
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -3591,6 +3591,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>  				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
>>  		gfp_mask &= ~__GFP_ATOMIC;
>>  
>> +	/*
>> +	 * The fast path uses conservative alloc_flags to succeed only until
>> +	 * kswapd needs to be woken up, and to avoid the cost of setting up
>> +	 * alloc_flags precisely. So we do that now.
>> +	 */
>> +	alloc_flags = gfp_to_alloc_flags(gfp_mask);
>> +
>>  retry_cpuset:
>>  	compaction_retries = 0;
>>  	no_progress_loops = 0;
>> @@ -3607,14 +3614,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>  	if (!ac->preferred_zoneref->zone)
>>  		goto nopage;
>>  
>> -
>> -	/*
>> -	 * The fast path uses conservative alloc_flags to succeed only until
>> -	 * kswapd needs to be woken up, and to avoid the cost of setting up
>> -	 * alloc_flags precisely. So we do that now.
>> -	 */
>> -	alloc_flags = gfp_to_alloc_flags(gfp_mask);
>> -
>>  	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
>>  		wake_all_kswapds(order, ac);
> 
> hm.  But we later do
> 
> 	if (gfp_pfmemalloc_allowed(gfp_mask))
> 		alloc_flags = ALLOC_NO_WATERMARKS;
> 
> 	...
> 	if (read_mems_allowed_retry(cpuset_mems_cookie))
> 		goto retry_cpuset;
> 
> so with your patch there's a path where we can rerun everything with
> alloc_flags == ALLOC_NO_WATERMARKS.  That's changed behaviour.

Right.
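
To spell out the hazard with a stand-in sketch (toy flag values and a stub
helper, not the real mm/page_alloc.c code): with the assignment above the
retry_cpuset label, a one-off ALLOC_NO_WATERMARKS override survives into the
next pass of the retry loop.

#include <stdio.h>

#define ALLOC_WMARK_LOW     0x1	/* toy stand-ins for the real ALLOC_* flags */
#define ALLOC_NO_WATERMARKS 0x2

static int gfp_to_alloc_flags_stub(void)
{
	return ALLOC_WMARK_LOW;
}

int main(void)
{
	int alloc_flags;
	int pass = 0;

	/* Arnd's placement: derived once, above the retry label. */
	alloc_flags = gfp_to_alloc_flags_stub();

retry_cpuset:
	/* ... allocation attempts ... */

	if (pass == 0)		/* pretend pfmemalloc was allowed on the first pass */
		alloc_flags = ALLOC_NO_WATERMARKS;

	if (pass++ < 1)		/* pretend read_mems_allowed_retry() asked for a retry */
		goto retry_cpuset;

	/* The second pass still runs with ALLOC_NO_WATERMARKS (prints 0x2). */
	printf("alloc_flags on final pass: %#x\n", alloc_flags);
	return 0;
}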

> When I saw the test robot warning I did this, which I think preserves
> behaviour?

Yes, that's cleaner. Thanks.

> --- a/mm/page_alloc.c~mm-consolidate-gfp_nofail-checks-in-the-allocator-slowpath-fix
> +++ a/mm/page_alloc.c
> @@ -3577,6 +3577,14 @@ retry_cpuset:
>  	no_progress_loops = 0;
>  	compact_priority = DEF_COMPACT_PRIORITY;
>  	cpuset_mems_cookie = read_mems_allowed_begin();
> +
> +	/*
> +	 * The fast path uses conservative alloc_flags to succeed only until
> +	 * kswapd needs to be woken up, and to avoid the cost of setting up
> +	 * alloc_flags precisely. So we do that now.
> +	 */
> +	alloc_flags = gfp_to_alloc_flags(gfp_mask);
> +
>  	/*
>  	 * We need to recalculate the starting point for the zonelist iterator
>  	 * because we might have used different nodemask in the fast path, or
> @@ -3588,14 +3596,6 @@ retry_cpuset:
>  	if (!ac->preferred_zoneref->zone)
>  		goto nopage;
>  
> -
> -	/*
> -	 * The fast path uses conservative alloc_flags to succeed only until
> -	 * kswapd needs to be woken up, and to avoid the cost of setting up
> -	 * alloc_flags precisely. So we do that now.
> -	 */
> -	alloc_flags = gfp_to_alloc_flags(gfp_mask);
> -
>  	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
>  		wake_all_kswapds(order, ac);
>  
> _
> 
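
For completeness, the same stand-in sketch (again toy values, not the
allocator itself) with the assignment moved below the retry_cpuset label, as
in your hunk above: each retry re-derives alloc_flags, so the one-off
override no longer leaks into the next pass and the old behaviour is kept.

#include <stdio.h>

#define ALLOC_WMARK_LOW     0x1	/* toy stand-ins for the real ALLOC_* flags */
#define ALLOC_NO_WATERMARKS 0x2

static int gfp_to_alloc_flags_stub(void)
{
	return ALLOC_WMARK_LOW;
}

int main(void)
{
	int alloc_flags;
	int pass = 0;

retry_cpuset:
	/* Your placement: re-derived on every pass through the retry label. */
	alloc_flags = gfp_to_alloc_flags_stub();

	if (pass == 0)		/* same one-off override as before */
		alloc_flags = ALLOC_NO_WATERMARKS;

	if (pass++ < 1)		/* pretend read_mems_allowed_retry() asked for a retry */
		goto retry_cpuset;

	/* The second pass is back to ALLOC_WMARK_LOW (prints 0x1). */
	printf("alloc_flags on final pass: %#x\n", alloc_flags);
	return 0;
}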
