Date:   Wed, 30 Nov 2016 15:59:53 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Johannes Weiner <hannes@...xchg.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v3

On Wed 30-11-16 14:16:13, Mel Gorman wrote:
> On Wed, Nov 30, 2016 at 02:05:50PM +0100, Michal Hocko wrote:
[...]
> > But... unless I am missing something, this effectively means that we
> > do not exercise the high-order atomic reserves. Shouldn't we fall
> > back to the locked __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC)
> > for order > 0 && ALLOC_HARDER? Or is this just hidden in some other
> > code path which I am not seeing?
> > 
> 
> Good spot. Would this be acceptable to you?

It's not a beauty queen, but it works. A more elegant solution would
require more surgery, which is probably not worth it at this stage.
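
To spell out what I meant, the open-coded variant would have been
something like the below (untested sketch only; __rmqueue_smallest()
and MIGRATE_HIGHATOMIC are as in mm/page_alloc.c, and the early return
skips the usual post-allocation checks). Your goto below gets the same
effect by reusing the existing locked path:

	/*
	 * Sketch: when the pcp refill comes back empty for a high-order
	 * atomic request, retry the buddy lists under zone->lock so that
	 * the MIGRATE_HIGHATOMIC reserve is actually exercised.
	 */
	if (order > 0 && (alloc_flags & ALLOC_HARDER)) {
		unsigned long flags;
		struct page *page;

		spin_lock_irqsave(&zone->lock, flags);
		page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
		if (!page)
			page = __rmqueue(zone, order, migratetype);
		spin_unlock_irqrestore(&zone->lock, flags);
		if (page)
			return page;	/* simplified */
	}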

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 91dc68c2a717..94808f565f74 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2609,9 +2609,18 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
>  				int nr_pages = rmqueue_bulk(zone, order,
>  						pcp->batch, list,
>  						migratetype, cold);
> -				pcp->count += (nr_pages << order);
> -				if (unlikely(list_empty(list)))
> +				if (unlikely(list_empty(list))) {
> +					/*
> +					 * Retry high-order atomic allocs
> +					 * from the buddy list which may
> +					 * use MIGRATE_HIGHATOMIC.
> +					 */
> +					if (order && (alloc_flags & ALLOC_HARDER))
> +						goto try_buddylist;
> +
>  					goto failed;
> +				}
> +				pcp->count += (nr_pages << order);
>  			}
>  
>  			if (cold)
> @@ -2624,6 +2633,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
>  
>  		} while (check_new_pcp(page));
>  	} else {
> +try_buddylist:
>  		/*
>  		 * We most definitely don't want callers attempting to
>  		 * allocate greater than order-1 page units with __GFP_NOFAIL.
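
With this, an order > 0 request with ALLOC_HARDER whose pcp refill
comes back empty falls through to the existing locked buddy path, which
is the one that knows about MIGRATE_HIGHATOMIC, while order-0 behaviour
stays as it was. That covers exactly the case I was asking about.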
> -- 
> Mel Gorman
> SUSE Labs

-- 
Michal Hocko
SUSE Labs
