Date:	Tue, 21 Apr 2009 10:29:12 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc:	Linux Memory Management List <linux-mm@...ck.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Lin Ming <ming.m.lin@...el.com>,
	Zhang Yanmin <yanmin_zhang@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 05/25] Break up the allocator entry point into fast and
	slow paths

On Tue, Apr 21, 2009 at 03:35:11PM +0900, KOSAKI Motohiro wrote:
> > The core of the page allocator is one giant function which allocates memory
> > on the stack and makes calculations that may not be needed for every
> > allocation. This patch breaks up the allocator path into fast and slow
> > paths for clarity. Note the slow paths are still inlined but the entry is
> > marked unlikely.  If they were not inlined, it actually increases text size
> > to generate them, as there is only one call site.
> 
> hmm..
> 
> this patch has a few behavior changes.
> Please separate the big cleanup patch from the behavior-changing patches.
> 

The changes in behavior are unintentional.

> I'd like this patch to be a purely non-functional change. I'm not sure
> whether these are intentional changes on your part or not; it makes
> reviewing harder...
> 

Agreed, this should not make functional changes.

> 
> > 
> > Signed-off-by: Mel Gorman <mel@....ul.ie>
> > Reviewed-by: Christoph Lameter <cl@...ux-foundation.org>
> > ---
> >  mm/page_alloc.c |  356 ++++++++++++++++++++++++++++++++++---------------------
> >  1 files changed, 222 insertions(+), 134 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3bed856..13b4d11 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1460,47 +1460,172 @@ try_next_zone:
> >  	return page;
> >  }
> >  
> > -/*
> > - * This is the 'heart' of the zoned buddy allocator.
> > - */
> > -struct page *
> > -__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
> > -			struct zonelist *zonelist, nodemask_t *nodemask)
> > +static inline int
> > +should_alloc_retry(gfp_t gfp_mask, unsigned int order,
> > +				unsigned long pages_reclaimed)
> >  {
> > -	const gfp_t wait = gfp_mask & __GFP_WAIT;
> > -	enum zone_type high_zoneidx = gfp_zone(gfp_mask);
> > -	struct zoneref *z;
> > -	struct zone *zone;
> > -	struct page *page;
> > -	struct reclaim_state reclaim_state;
> > -	struct task_struct *p = current;
> > -	int do_retry;
> > -	int alloc_flags;
> > -	unsigned long did_some_progress;
> > -	unsigned long pages_reclaimed = 0;
> > +	/* Do not loop if specifically requested */
> > +	if (gfp_mask & __GFP_NORETRY)
> > +		return 0;
> >  
> > -	lockdep_trace_alloc(gfp_mask);
> > +	/*
> > +	 * In this implementation, order <= PAGE_ALLOC_COSTLY_ORDER
> > +	 * means __GFP_NOFAIL, but that may not be true in other
> > +	 * implementations.
> > +	 */
> > +	if (order <= PAGE_ALLOC_COSTLY_ORDER)
> > +		return 1;
> > +
> > +	/*
> > +	 * For order > PAGE_ALLOC_COSTLY_ORDER, if __GFP_REPEAT is
> > +	 * specified, then we retry until we no longer reclaim any pages
> > +	 * (above), or we've reclaimed an order of pages at least as
> > +	 * large as the allocation's order. In both cases, if the
> > +	 * allocation still fails, we stop retrying.
> > +	 */
> > +	if (gfp_mask & __GFP_REPEAT && pages_reclaimed < (1 << order))
> > +		return 1;
> >  
> > -	might_sleep_if(wait);
> > +	/*
> > +	 * Don't let big-order allocations loop unless the caller
> > +	 * explicitly requests that.
> > +	 */
> > +	if (gfp_mask & __GFP_NOFAIL)
> > +		return 1;
> >  
> > -	if (should_fail_alloc_page(gfp_mask, order))
> > -		return NULL;
> > +	return 0;
> > +}
> >  
> > -	/* the list of zones suitable for gfp_mask */
> > -	z = zonelist->_zonerefs;
> > -	if (unlikely(!z->zone)) {
> > -		/*
> > -		 * Happens if we have an empty zonelist as a result of
> > -		 * GFP_THISNODE being used on a memoryless node
> > -		 */
> > +static inline struct page *
> > +__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
> > +	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > +	nodemask_t *nodemask)
> > +{
> > +	struct page *page;
> > +
> > +	/* Acquire the OOM killer lock for the zones in zonelist */
> > +	if (!try_set_zone_oom(zonelist, gfp_mask)) {
> > +		schedule_timeout_uninterruptible(1);
> >  		return NULL;
> >  	}
> >  
> > -restart:
> > -	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
> > -			zonelist, high_zoneidx, ALLOC_WMARK_LOW|ALLOC_CPUSET);
> > +	/*
> > +	 * Go through the zonelist yet one more time, keep very high watermark
> > +	 * here, this is only to catch a parallel oom killing, we must fail if
> > +	 * we're still under heavy pressure.
> > +	 */
> > +	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask,
> > +		order, zonelist, high_zoneidx,
> > +		ALLOC_WMARK_HIGH|ALLOC_CPUSET);
> >  	if (page)
> > -		goto got_pg;
> > +		goto out;
> > +
> > +	/* The OOM killer will not help higher order allocs */
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER)
> > +		goto out;
> > +
> > +	/* Exhausted what can be done so it's blamo time */
> > +	out_of_memory(zonelist, gfp_mask, order);
> > +
> > +out:
> > +	clear_zonelist_oom(zonelist, gfp_mask);
> > +	return page;
> > +}
> > +
> > +/* The really slow allocator path where we enter direct reclaim */
> > +static inline struct page *
> > +__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > +	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > +	nodemask_t *nodemask, int alloc_flags, unsigned long *did_some_progress)
> > +{
> > +	struct page *page = NULL;
> > +	struct reclaim_state reclaim_state;
> > +	struct task_struct *p = current;
> > +
> > +	cond_resched();
> > +
> > +	/* We now go into synchronous reclaim */
> > +	cpuset_memory_pressure_bump();
> > +
> > +	/*
> > +	 * The task's cpuset might have expanded its set of allowable nodes
> > +	 */
> > +	p->flags |= PF_MEMALLOC;
> > +	lockdep_set_current_reclaim_state(gfp_mask);
> > +	reclaim_state.reclaimed_slab = 0;
> > +	p->reclaim_state = &reclaim_state;
> > +
> > +	*did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
> > +
> > +	p->reclaim_state = NULL;
> > +	lockdep_clear_current_reclaim_state();
> > +	p->flags &= ~PF_MEMALLOC;
> > +
> > +	cond_resched();
> > +
> > +	if (order != 0)
> > +		drain_all_pages();
> > +
> > +	if (likely(*did_some_progress))
> > +		page = get_page_from_freelist(gfp_mask, nodemask, order,
> > +					zonelist, high_zoneidx, alloc_flags);
> > +	return page;
> > +}
> > +
> > +static inline int
> > +is_allocation_high_priority(struct task_struct *p, gfp_t gfp_mask)
> > +{
> > +	if (((p->flags & PF_MEMALLOC) || unlikely(test_thread_flag(TIF_MEMDIE)))
> > +			&& !in_interrupt())
> > +		if (!(gfp_mask & __GFP_NOMEMALLOC))
> > +			return 1;
> > +	return 0;
> > +}
> > +
> > +/*
> > + * This is called in the allocator slow-path if the allocation request is of
> > + * sufficient urgency to ignore watermarks and take other desperate measures
> > + */
> > +static inline struct page *
> > +__alloc_pages_high_priority(gfp_t gfp_mask, unsigned int order,
> > +	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > +	nodemask_t *nodemask)
> > +{
> > +	struct page *page;
> > +
> > +	do {
> > +		page = get_page_from_freelist(gfp_mask, nodemask, order,
> > +			zonelist, high_zoneidx, ALLOC_NO_WATERMARKS);
> > +
> > +		if (!page && gfp_mask & __GFP_NOFAIL)
> > +			congestion_wait(WRITE, HZ/50);
> > +	} while (!page && (gfp_mask & __GFP_NOFAIL));
> > +
> > +	return page;
> > +}
> > +
> > +static inline
> > +void wake_all_kswapd(unsigned int order, struct zonelist *zonelist,
> > +						enum zone_type high_zoneidx)
> > +{
> > +	struct zoneref *z;
> > +	struct zone *zone;
> > +
> > +	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> > +		wakeup_kswapd(zone, order);
> > +}
> > +
> > +static inline struct page *
> > +__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > +	struct zonelist *zonelist, enum zone_type high_zoneidx,
> > +	nodemask_t *nodemask)
> > +{
> > +	const gfp_t wait = gfp_mask & __GFP_WAIT;
> > +	struct page *page = NULL;
> > +	int alloc_flags;
> > +	unsigned long pages_reclaimed = 0;
> > +	unsigned long did_some_progress;
> > +	struct task_struct *p = current;
> >  
> >  	/*
> >  	 * GFP_THISNODE (meaning __GFP_THISNODE, __GFP_NORETRY and
> > @@ -1513,8 +1638,7 @@ restart:
> >  	if (NUMA_BUILD && (gfp_mask & GFP_THISNODE) == GFP_THISNODE)
> >  		goto nopage;
> >  
> > -	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> > -		wakeup_kswapd(zone, order);
> > +	wake_all_kswapd(order, zonelist, high_zoneidx);
> >  
> >  	/*
> >  	 * OK, we're below the kswapd watermark and have kicked background
> > @@ -1534,6 +1658,7 @@ restart:
> >  	if (wait)
> >  		alloc_flags |= ALLOC_CPUSET;
> >  
> > +restart:
> >  	/*
> >  	 * Go through the zonelist again. Let __GFP_HIGH and allocations
> >  	 * coming from realtime tasks go deeper into reserves.
> > @@ -1547,119 +1672,47 @@ restart:
> >  	if (page)
> >  		goto got_pg;
> >  
> > -	/* This allocation should allow future memory freeing. */
> > -
> > -rebalance:
> > -	if (((p->flags & PF_MEMALLOC) || unlikely(test_thread_flag(TIF_MEMDIE)))
> > -			&& !in_interrupt()) {
> > -		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
> > -nofail_alloc:
> > -			/* go through the zonelist yet again, ignoring mins */
> > -			page = get_page_from_freelist(gfp_mask, nodemask, order,
> > -				zonelist, high_zoneidx, ALLOC_NO_WATERMARKS);
> > -			if (page)
> > -				goto got_pg;
> > -			if (gfp_mask & __GFP_NOFAIL) {
> > -				congestion_wait(WRITE, HZ/50);
> > -				goto nofail_alloc;
> > -			}
> > -		}
> > -		goto nopage;
> > -	}
> > +	/* Allocate without watermarks if the context allows */
> > +	if (is_allocation_high_priority(p, gfp_mask))
> > +		page = __alloc_pages_high_priority(gfp_mask, order,
> > +			zonelist, high_zoneidx, nodemask);
> > +	if (page)
> > +		goto got_pg;
> >  
> >  	/* Atomic allocations - we can't balance anything */
> >  	if (!wait)
> >  		goto nopage;
> >  
> 
> The old code is below.
> In the case where PF_MEMALLOC is set, !in_interrupt() is true and
> __GFP_NOMEMALLOC is specified, the old code jumps to nopage; your
> version calls reclaim.
> 
> I think that if a task has PF_MEMALLOC set, it shouldn't call reclaim.
> Otherwise, endless reclaim recursion can happen.
> 
> --------------------------------------------------------------------
> rebalance:
>         if (((p->flags & PF_MEMALLOC) || unlikely(test_thread_flag(TIF_MEMDIE)))
>                         && !in_interrupt()) {
>                 if (!(gfp_mask & __GFP_NOMEMALLOC)) {
> nofail_alloc:
>                         /* go through the zonelist yet again, ignoring mins */
>                         page = get_page_from_freelist(gfp_mask, nodemask, order,
>                                 zonelist, high_zoneidx, ALLOC_NO_WATERMARKS);
>                         if (page)
>                                 goto got_pg;
>                         if (gfp_mask & __GFP_NOFAIL) {
>                                 congestion_wait(WRITE, HZ/50);
>                                 goto nofail_alloc;
>                         }
>                 }
>                 goto nopage;
>         }
> --------------------------------------------------------------------
> 

I altered the modified version to look like

static inline int
is_allocation_high_priority(struct task_struct *p, gfp_t gfp_mask)
{
        if (((p->flags & PF_MEMALLOC) || unlikely(test_thread_flag(TIF_MEMDIE)))
                        && !in_interrupt())
                return 1;
        return 0;
}

Note that the check for __GFP_NOMEMALLOC is no longer there.

....

        /* Allocate without watermarks if the context allows */
        if (is_allocation_high_priority(p, gfp_mask)) {
		/* Do not dip into emergency reserves if specified */
                if (!(gfp_mask & __GFP_NOMEMALLOC)) {
                        page = __alloc_pages_high_priority(gfp_mask, order,
                                zonelist, high_zoneidx, nodemask);
                        if (page)
                                goto got_pg;
                }

		/* Ensure no recursion into the allocator */
                goto nopage;
        }


Is that better?
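
To make the recursion hazard KOSAKI describes concrete, here is a toy
standalone sketch of the guard pattern (illustrative only; PF_MEMALLOC
mirrors the kernel flag's value, but this is not mm code):

#include <stdio.h>

#define PF_MEMALLOC 0x00000800	/* mirrors the kernel flag; toy use only */

struct toy_task { unsigned int flags; };

/* Returns 1 if reclaim ran, 0 if it refused to recurse */
static int toy_direct_reclaim(struct toy_task *p)
{
	/* A task that is already reclaiming must not re-enter reclaim */
	if (p->flags & PF_MEMALLOC)
		return 0;	/* corresponds to the "goto nopage" above */

	p->flags |= PF_MEMALLOC;
	/*
	 * Synchronous reclaim would run here; anything it allocates
	 * re-enters the allocator, and the check above stops the loop.
	 */
	p->flags &= ~PF_MEMALLOC;
	return 1;
}

int main(void)
{
	struct toy_task p = { .flags = 0 };

	printf("first reclaim ran: %d\n", toy_direct_reclaim(&p));
	p.flags |= PF_MEMALLOC;	/* simulate being inside reclaim */
	printf("nested reclaim ran: %d\n", toy_direct_reclaim(&p));
	return 0;
}

Without the early return, an allocation made from inside reclaim would
recurse back into reclaim indefinitely, which is exactly what the
"goto nopage" prevents.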

> >  
> >  	/* Atomic allocations - we can't balance anything */
> >  	if (!wait)
> >  		goto nopage;
> >  
> > -	cond_resched();
> > -
> > -	/* We now go into synchronous reclaim */
> > -	cpuset_memory_pressure_bump();
> > -
> > -	p->flags |= PF_MEMALLOC;
> > -
> > -	lockdep_set_current_reclaim_state(gfp_mask);
> > -	reclaim_state.reclaimed_slab = 0;
> > -	p->reclaim_state = &reclaim_state;
> > -
> > -	did_some_progress = try_to_free_pages(zonelist, order,
> > -						gfp_mask, nodemask);
> > -
> > -	p->reclaim_state = NULL;
> > -	lockdep_clear_current_reclaim_state();
> > -	p->flags &= ~PF_MEMALLOC;
> > -
> > -	cond_resched();
> > +	/* Try direct reclaim and then allocating */
> > +	page = __alloc_pages_direct_reclaim(gfp_mask, order,
> > +					zonelist, high_zoneidx,
> > +					nodemask,
> > +					alloc_flags, &did_some_progress);
> > +	if (page)
> > +		goto got_pg;
> >  
> > -	if (order != 0)
> > -		drain_all_pages();
> > +	/*
> > +	 * If we failed to make any progress reclaiming, then we are
> > +	 * running out of options and have to consider going OOM
> > +	 */
> > +	if (!did_some_progress) {
> > +		if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
> > +			page = __alloc_pages_may_oom(gfp_mask, order,
> > +					zonelist, high_zoneidx,
> > +					nodemask);
> > +			if (page)
> > +				goto got_pg;
> 
> The old code is here:
> 
> ------------------------------------------------------------------------
>         } else if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
>                 if (!try_set_zone_oom(zonelist, gfp_mask)) {
>                         schedule_timeout_uninterruptible(1);
>                         goto restart;
>                 }
> 
>                 /*
>                  * Go through the zonelist yet one more time, keep
>                  * very high watermark here, this is only to catch
>                  * a parallel oom killing, we must fail if we're still
>                  * under heavy pressure.
>                  */
>                 page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask,
>                         order, zonelist, high_zoneidx,
>                         ALLOC_WMARK_HIGH|ALLOC_CPUSET);
>                 if (page) {
>                         clear_zonelist_oom(zonelist, gfp_mask);
>                         goto got_pg;
>                 }
> 
>                 /* The OOM killer will not help higher order allocs so fail */
>                 if (order > PAGE_ALLOC_COSTLY_ORDER) {
>                         clear_zonelist_oom(zonelist, gfp_mask);
>                         goto nopage;
>                 }
> 
>                 out_of_memory(zonelist, gfp_mask, order);
>                 clear_zonelist_oom(zonelist, gfp_mask);
>                 goto restart;
>         }
> ------------------------------------------------------------------------
> 
> If get_page_from_freelist() returns NULL and order > PAGE_ALLOC_COSTLY_ORDER,
> the old code jumps to nopage, but your version jumps to restart.
> 

Good spot. The new section now looks like

                        page = __alloc_pages_may_oom(gfp_mask, order,
                                        zonelist, high_zoneidx,
                                        nodemask);
                        if (page)
                                goto got_pg;

                        /*
                         * The OOM killer does not trigger for high-order allocations
                         * but if no progress is being made, there are no other
                         * options and retrying is unlikely to help
                         */
                        if (order > PAGE_ALLOC_COSTLY_ORDER)
                                goto nopage;

Better?
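
For reference, my reading of how the whole no-progress branch then fits
together, assembled from the fragments above (a sketch, not the final
patch):

	if (!did_some_progress) {
		if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
			page = __alloc_pages_may_oom(gfp_mask, order,
					zonelist, high_zoneidx, nodemask);
			if (page)
				goto got_pg;

			/* The OOM killer is no help for high-order allocs */
			if (order > PAGE_ALLOC_COSTLY_ORDER)
				goto nopage;

			goto restart;
		}
	}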

> 
> 
> 
> > -	if (likely(did_some_progress)) {
> > -		page = get_page_from_freelist(gfp_mask, nodemask, order,
> > -					zonelist, high_zoneidx, alloc_flags);
> > -		if (page)
> > -			goto got_pg;
> > -	} else if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
> > -		if (!try_set_zone_oom(zonelist, gfp_mask)) {
> > -			schedule_timeout_uninterruptible(1);
> >  			goto restart;
> >  		}
> > -
> > -		/*
> > -		 * Go through the zonelist yet one more time, keep
> > -		 * very high watermark here, this is only to catch
> > -		 * a parallel oom killing, we must fail if we're still
> > -		 * under heavy pressure.
> > -		 */
> > -		page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask,
> > -			order, zonelist, high_zoneidx,
> > -			ALLOC_WMARK_HIGH|ALLOC_CPUSET);
> > -		if (page) {
> > -			clear_zonelist_oom(zonelist, gfp_mask);
> > -			goto got_pg;
> > -		}
> > -
> > -		/* The OOM killer will not help higher order allocs so fail */
> > -		if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > -			clear_zonelist_oom(zonelist, gfp_mask);
> > -			goto nopage;
> > -		}
> > -
> > -		out_of_memory(zonelist, gfp_mask, order);
> > -		clear_zonelist_oom(zonelist, gfp_mask);
> > -		goto restart;
> >  	}
> >  
> > -	/*
> > -	 * Don't let big-order allocations loop unless the caller explicitly
> > -	 * requests that.  Wait for some write requests to complete then retry.
> > -	 *
> > -	 * In this implementation, order <= PAGE_ALLOC_COSTLY_ORDER
> > -	 * means __GFP_NOFAIL, but that may not be true in other
> > -	 * implementations.
> > -	 *
> > -	 * For order > PAGE_ALLOC_COSTLY_ORDER, if __GFP_REPEAT is
> > -	 * specified, then we retry until we no longer reclaim any pages
> > -	 * (above), or we've reclaimed an order of pages at least as
> > -	 * large as the allocation's order. In both cases, if the
> > -	 * allocation still fails, we stop retrying.
> > -	 */
> > +	/* Check if we should retry the allocation */
> >  	pages_reclaimed += did_some_progress;
> > -	do_retry = 0;
> > -	if (!(gfp_mask & __GFP_NORETRY)) {
> > -		if (order <= PAGE_ALLOC_COSTLY_ORDER) {
> > -			do_retry = 1;
> > -		} else {
> > -			if (gfp_mask & __GFP_REPEAT &&
> > -				pages_reclaimed < (1 << order))
> > -					do_retry = 1;
> > -		}
> > -		if (gfp_mask & __GFP_NOFAIL)
> > -			do_retry = 1;
> > -	}
> > -	if (do_retry) {
> > +	if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
> > +		/* Wait for some write requests to complete then retry */
> >  		congestion_wait(WRITE, HZ/50);
> > -		goto rebalance;
> > +		goto restart;
> 
> This changes rebalance to restart.
> 

True. It makes more sense to me to goto restart at that point after
waiting on IO to complete, but it's a functional change and doesn't
belong in this patch. I've fixed it up.

Very well spotted.
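
For anyone following along, the two labels re-enter the slow path at
different points (a schematic of the hunks above, not the actual code):

	restart:	/* new patch: re-runs get_page_from_freelist()
			 * with the computed alloc_flags first */
		...
	rebalance:	/* old label: went straight back to the
			 * no-watermark / high-priority check */
		...
		direct reclaim, OOM and retry decisions follow

So "goto restart" re-tests the normal watermarks before falling through
to the reserve-dipping and reclaim logic, while the old "goto rebalance"
skipped that first attempt.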

> 
> >  	}
> >  
> >  nopage:
> > @@ -1672,6 +1725,41 @@ nopage:
> >  	}
> >  got_pg:
> >  	return page;
> > +
> > +}
> > +
> > +/*
> > + * This is the 'heart' of the zoned buddy allocator.
> > + */
> > +struct page *
> > +__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
> > +			struct zonelist *zonelist, nodemask_t *nodemask)
> > +{
> > +	enum zone_type high_zoneidx = gfp_zone(gfp_mask);
> > +	struct page *page;
> > +
> > +	lockdep_trace_alloc(gfp_mask);
> > +
> > +	might_sleep_if(gfp_mask & __GFP_WAIT);
> > +
> > +	if (should_fail_alloc_page(gfp_mask, order))
> > +		return NULL;
> > +
> > +	/*
> > +	 * Check the zones suitable for the gfp_mask contain at least one
> > +	 * valid zone. It's possible to have an empty zonelist as a result
> > +	 * of GFP_THISNODE and a memoryless node
> > +	 */
> > +	if (unlikely(!zonelist->_zonerefs->zone))
> > +		return NULL;
> > +
> > +	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
> > +			zonelist, high_zoneidx, ALLOC_WMARK_LOW|ALLOC_CPUSET);
> > +	if (unlikely(!page))
> > +		page = __alloc_pages_slowpath(gfp_mask, order,
> > +				zonelist, high_zoneidx, nodemask);
> > +
> > +	return page;
> >  }
> >  EXPORT_SYMBOL(__alloc_pages_nodemask);
> >  
> > -- 
> > 1.5.6.5
> > 
> 
> 
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
