Date:	Mon, 2 Feb 2015 16:02:49 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Vlastimil Babka <vbabka@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	David Rientjes <rientjes@...gle.com>,
	Rik van Riel <riel@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/4] mm/page_alloc: separate steal decision from steal
 behaviour part

On Fri, Jan 30, 2015 at 03:27:50PM +0100, Vlastimil Babka wrote:
> On 01/30/2015 01:34 PM, Joonsoo Kim wrote:
> > From: Joonsoo <iamjoonsoo.kim@....com>
> > 
> > This is a preparation step to use the page allocator's anti-fragmentation
> > logic in compaction. This patch just separates the steal decision part from
> > the actual steal behaviour part, so there is no functional change.
> > 
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> > ---
> >  mm/page_alloc.c | 49 ++++++++++++++++++++++++++++++++-----------------
> >  1 file changed, 32 insertions(+), 17 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 8d52ab1..ef74750 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1122,6 +1122,24 @@ static void change_pageblock_range(struct page *pageblock_page,
> >  	}
> >  }
> >  
> > +static bool can_steal_freepages(unsigned int order,
> > +				int start_mt, int fallback_mt)
> > +{
> > +	if (is_migrate_cma(fallback_mt))
> > +		return false;
> > +
> > +	if (order >= pageblock_order)
> > +		return true;
> > +
> > +	if (order >= pageblock_order / 2 ||
> > +		start_mt == MIGRATE_RECLAIMABLE ||
> > +		start_mt == MIGRATE_UNMOVABLE ||
> > +		page_group_by_mobility_disabled)
> > +		return true;
> > +
> > +	return false;
> > +}
> > +
> >  /*
> >   * When we are falling back to another migratetype during allocation, try to
> >   * steal extra free pages from the same pageblocks to satisfy further
> > @@ -1138,9 +1156,10 @@ static void change_pageblock_range(struct page *pageblock_page,
> >   * as well.
> >   */
> >  static void try_to_steal_freepages(struct zone *zone, struct page *page,
> > -				  int start_type, int fallback_type)
> > +				  int start_type)
> 
> It's actually not 'try_to_' anymore, is it? But could be, see below.
> 
> >  {
> >  	int current_order = page_order(page);
> > +	int pages;
> >  
> >  	/* Take ownership for orders >= pageblock_order */
> >  	if (current_order >= pageblock_order) {
> > @@ -1148,19 +1167,12 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
> >  		return;
> >  	}
> >  
> > -	if (current_order >= pageblock_order / 2 ||
> > -	    start_type == MIGRATE_RECLAIMABLE ||
> > -	    start_type == MIGRATE_UNMOVABLE ||
> > -	    page_group_by_mobility_disabled) {
> > -		int pages;
> > +	pages = move_freepages_block(zone, page, start_type);
> >  
> > -		pages = move_freepages_block(zone, page, start_type);
> > -
> > -		/* Claim the whole block if over half of it is free */
> > -		if (pages >= (1 << (pageblock_order-1)) ||
> > -				page_group_by_mobility_disabled)
> > -			set_pageblock_migratetype(page, start_type);
> > -	}
> > +	/* Claim the whole block if over half of it is free */
> > +	if (pages >= (1 << (pageblock_order-1)) ||
> > +			page_group_by_mobility_disabled)
> > +		set_pageblock_migratetype(page, start_type);
> >  }
> >  
> >  /* Remove an element from the buddy allocator from the fallback list */
> > @@ -1170,6 +1182,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
> >  	struct free_area *area;
> >  	unsigned int current_order;
> >  	struct page *page;
> > +	bool can_steal;
> >  
> >  	/* Find the largest possible block of pages in the other list */
> >  	for (current_order = MAX_ORDER-1;
> > @@ -1192,10 +1205,11 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
> >  					struct page, lru);
> >  			area->nr_free--;
> >  
> > -			if (!is_migrate_cma(migratetype)) {
> > +			can_steal = can_steal_freepages(current_order,
> > +					start_migratetype, migratetype);
> > +			if (can_steal) {
> 
> can_steal is only used once; why not do if (can_steal_freepages()) directly?
> 
> Or, call can_steal_freepages() from try_to_steal_freepages() and make
> try_to_steal_freepages() return its result. Then here it simplifies to:
> 
> if (!try_to_steal_freepages(...) && is_migrate_cma(...))
> 	buddy_type = migratetype;

You're right. The code you suggested looks better.

Your comments on 3/4 and 4/4 made me reconsider this code factorization,
and I found a better solution.
I will send it soon.

Thanks.
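
For reference, the split under discussion, together with the simplified
caller suggested above, can be sketched as a stand-alone C program. The
PAGEBLOCK_ORDER value, the migratetype enum and the move_freepages_block()
stub below are illustrative assumptions, not the real mm/page_alloc.c
definitions, and page_group_by_mobility_disabled is left out of the sketch:

/*
 * Stand-alone sketch of the decision/behaviour split discussed in this
 * thread.  Simplified types and stubs, for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_ORDER 9	/* assumed value for the sketch */

enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_MOVABLE,
	MIGRATE_CMA,
};

static bool is_migrate_cma(int mt)
{
	return mt == MIGRATE_CMA;
}

/* Decision part only: no side effects, so compaction could reuse it. */
static bool can_steal_freepages(unsigned int order, int start_mt, int fallback_mt)
{
	if (is_migrate_cma(fallback_mt))
		return false;

	if (order >= PAGEBLOCK_ORDER)
		return true;

	if (order >= PAGEBLOCK_ORDER / 2 ||
	    start_mt == MIGRATE_RECLAIMABLE ||
	    start_mt == MIGRATE_UNMOVABLE)
		return true;

	return false;
}

/* Behaviour part, stubbed: pretend half the pageblock was moved. */
static int move_freepages_block(int start_mt)
{
	printf("moving free pages to migratetype %d\n", start_mt);
	return 1 << (PAGEBLOCK_ORDER - 1);
}

/*
 * Per the review suggestion: make the decision here and return whether
 * the steal actually happened, so the caller needs only one if ().
 */
static bool try_to_steal_freepages(unsigned int order, int start_mt, int fallback_mt)
{
	if (!can_steal_freepages(order, start_mt, fallback_mt))
		return false;

	/* Claim the whole block if over half of it is free. */
	if (move_freepages_block(start_mt) >= (1 << (PAGEBLOCK_ORDER - 1)))
		printf("claimed whole pageblock for migratetype %d\n", start_mt);

	return true;
}

int main(void)
{
	int start_migratetype = MIGRATE_UNMOVABLE;
	int migratetype = MIGRATE_CMA;	/* fallback list the page came from */
	int buddy_type = start_migratetype;

	/* Caller shape from the suggested simplification above. */
	if (!try_to_steal_freepages(5, start_migratetype, migratetype) &&
	    is_migrate_cma(migratetype))
		buddy_type = migratetype;

	printf("buddy_type = %d\n", buddy_type);
	return 0;
}

With the CMA fallback shown in main(), can_steal_freepages() refuses the
steal, try_to_steal_freepages() returns false, and the caller keeps the
fallback migratetype for the buddy, which is exactly the case the
single-condition caller has to preserve.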
