Date:	Wed, 1 Jun 2016 15:26:43 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Vlastimil Babka <vbabka@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Mel Gorman <mgorman@...hsingularity.net>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	David Rientjes <rientjes@...gle.com>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH v2 03/18] mm, page_alloc: don't retry initial attempt in
 slowpath

On Tue 31-05-16 15:08:03, Vlastimil Babka wrote:
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index da3a62a94b4a..9f83259a18a8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3367,10 +3367,9 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	bool drained = false;
>  
>  	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> -	if (unlikely(!(*did_some_progress)))
> -		return NULL;
>  
>  retry:
> +	/* We attempt even when no progress, as kswapd might have done some */
>  	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

Is this really likely to happen, though? Sure, we might still have the
last few reclaimable pages on the LRU lists, but I am not sure that
would make a large difference then.

That being said, I do not think this is harmful, but I find it a bit
weird to invoke reclaim and then ignore its feedback... I will leave the
decision up to you, but the original patch seemed neater.

>  
>  	/*
> @@ -3378,7 +3377,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * pages are pinned on the per-cpu lists or in high alloc reserves.
>  	 * Shrink them and try again
>  	 */
> -	if (!page && !drained) {
> +	if (!page && *did_some_progress && !drained) {
>  		unreserve_highatomic_pageblock(ac);
>  		drain_all_pages(NULL);
>  		drained = true;

I do not remember this in the previous version. Why shouldn't we
unreserve highatomic reserves when there was no progress?
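
For reference, taking both hunks together, the function would then read
roughly like the sketch below (simplified, using only the helpers shown
in the diff; anything not visible in the hunks is omitted):

static struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
		unsigned int alloc_flags, const struct alloc_context *ac,
		unsigned long *did_some_progress)
{
	struct page *page = NULL;
	bool drained = false;

	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);

retry:
	/* Attempt even when no progress was made; kswapd (or another
	 * task) might have freed pages in the meantime. */
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * Only when our own reclaim made progress do we drain the
	 * per-cpu lists and release the highatomic reserves before
	 * retrying once.
	 */
	if (!page && *did_some_progress && !drained) {
		unreserve_highatomic_pageblock(ac);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}

	return page;
}

So a no-progress reclaim now gets exactly one allocation attempt and
returns NULL without touching the highatomic reserves, which is what my
question above is about.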

-- 
Michal Hocko
SUSE Labs
