Date:   Wed, 30 Nov 2016 14:05:50 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Johannes Weiner <hannes@...xchg.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v3

On Sun 27-11-16 13:19:54, Mel Gorman wrote:
[...]
> @@ -2588,18 +2594,22 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
>  	struct page *page;
>  	bool cold = ((gfp_flags & __GFP_COLD) != 0);
>  
> -	if (likely(order == 0)) {
> +	if (likely(order <= PAGE_ALLOC_COSTLY_ORDER)) {
>  		struct per_cpu_pages *pcp;
>  		struct list_head *list;
>  
>  		local_irq_save(flags);
>  		do {
> +			unsigned int pindex;
> +
> +			pindex = order_to_pindex(migratetype, order);
>  			pcp = &this_cpu_ptr(zone->pageset)->pcp;
> -			list = &pcp->lists[migratetype];
> +			list = &pcp->lists[pindex];
>  			if (list_empty(list)) {
> -				pcp->count += rmqueue_bulk(zone, 0,
> +				int nr_pages = rmqueue_bulk(zone, order,
>  						pcp->batch, list,
>  						migratetype, cold);
> +				pcp->count += (nr_pages << order);
>  				if (unlikely(list_empty(list)))
>  					goto failed;

Just a nit: we can reorder the check and the count update because nobody
could have stolen the pages allocated by rmqueue_bulk. I would also
consider nr_pages a bit misleading because what we get back is the number
of allocated list elements, not base pages. Nothing to lose sleep over...
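
Something like the following is what I have in mind (an untested sketch
only, with the variable renamed to make it clear it counts list elements
rather than base pages):

	if (list_empty(list)) {
		int alloced = rmqueue_bulk(zone, order,
				pcp->batch, list,
				migratetype, cold);
		/* nothing can steal from this pcp list under
		 * local_irq_save(), so the count update can safely
		 * follow the emptiness check */
		if (unlikely(list_empty(list)))
			goto failed;
		pcp->count += (alloced << order);
	}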

>  			}

But...  Unless I am missing something, this effectively means that we do
not exercise the high-order atomic reserves. Shouldn't we fall back to
the locked __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC) for
order > 0 && ALLOC_HARDER requests? Or is this just hidden in some other
code path which I am not seeing?
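
To make the question concrete, something like the below is what I would
have expected (untested sketch against your patch), i.e. letting
order > 0 && ALLOC_HARDER requests skip the pcp fast path so they still
reach the locked path, which AFAICS is the only place that tries
__rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC):

	/* untested sketch - divert order > 0 ALLOC_HARDER requests to
	 * the locked slow path so they can still dip into the
	 * highatomic reserve */
	if (likely(order <= PAGE_ALLOC_COSTLY_ORDER) &&
	    !(order && (alloc_flags & ALLOC_HARDER))) {
		struct per_cpu_pages *pcp;
		struct list_head *list;
		/* per-cpu lists as in your patch */
		...
	} else {
		/* existing locked path, which tries MIGRATE_HIGHATOMIC
		 * first for ALLOC_HARDER before falling back to
		 * __rmqueue() */
		...
	}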

Other than that the patch looks reasonable to me. Keeping some portion
of !costly pages on the pcp lists sounds useful from the fragmentation
point of view as well, AFAICS, because those pages would normally get
dissolved by order-0 requests, whereas right now we push more on reclaim
instead.

-- 
Michal Hocko
SUSE Labs