Date:   Tue, 25 Apr 2023 10:39:12 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     linux-mm@...ck.org, Kaiyang Zhao <kaiyang2@...cmu.edu>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Rientjes <rientjes@...gle.com>,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [RFC PATCH 08/26] mm: page_alloc: claim blocks during compaction
 capturing

On Fri, Apr 21, 2023 at 02:12:27PM +0100, Mel Gorman wrote:
> On Tue, Apr 18, 2023 at 03:12:55PM -0400, Johannes Weiner wrote:
> > When capturing a whole block, update the migratetype accordingly. For
> > example, a THP allocation might capture an unmovable block. If the THP
> > gets split and partially freed later, the remainder should group up
> > with movable allocations.
> > 
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > ---
> >  mm/internal.h   |  1 +
> >  mm/page_alloc.c | 42 ++++++++++++++++++++++++------------------
> >  2 files changed, 25 insertions(+), 18 deletions(-)
> > 
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 024affd4e4b5..39f65a463631 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -432,6 +432,7 @@ struct compact_control {
> >   */
> >  struct capture_control {
> >  	struct compact_control *cc;
> > +	int migratetype;
> >  	struct page *page;
> >  };
> >  
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4d20513c83be..8e5996f8b4b4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -615,6 +615,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
> >  				page_to_pfn(page), MIGRATETYPE_MASK);
> >  }
> >  
> > +static void change_pageblock_range(struct page *pageblock_page,
> > +					int start_order, int migratetype)
> > +{
> > +	int nr_pageblocks = 1 << (start_order - pageblock_order);
> > +
> > +	while (nr_pageblocks--) {
> > +		set_pageblock_migratetype(pageblock_page, migratetype);
> > +		pageblock_page += pageblock_nr_pages;
> > +	}
> > +}
> > +
> >  #ifdef CONFIG_DEBUG_VM
> >  static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
> >  {
> > @@ -962,14 +973,19 @@ compaction_capture(struct capture_control *capc, struct page *page,
> >  	    is_migrate_isolate(migratetype))
> >  		return false;
> >  
> > -	/*
> > -	 * Do not let lower order allocations pollute a movable pageblock.
> > -	 * This might let an unmovable request use a reclaimable pageblock
> > -	 * and vice-versa but no more than normal fallback logic which can
> > -	 * have trouble finding a high-order free page.
> > -	 */
> > -	if (order < pageblock_order && migratetype == MIGRATE_MOVABLE)
> > +	if (order >= pageblock_order) {
> > +		migratetype = capc->migratetype;
> > +		change_pageblock_range(page, order, migratetype);
> > +	} else if (migratetype == MIGRATE_MOVABLE) {
> > +		/*
> > +		 * Do not let lower order allocations pollute a
> > +		 * movable pageblock.  This might let an unmovable
> > +		 * request use a reclaimable pageblock and vice-versa
> > +		 * but no more than normal fallback logic which can
> > +		 * have trouble finding a high-order free page.
> > +		 */
> >  		return false;
> > +	}
> >  
> 
> For capturing pageblock order or larger, why not unconditionally make
> the block MOVABLE? Even if it's a zero page allocation, it would be nice
> to keep the pageblock for movable pages after the split as long as
> possible.

The zero page isn't split, but if some other unmovable allocation
does a split and free later on, I want to avoid filling a block that
already holds an unmovable allocation with movables. That block is
already lost to compaction, and this way future unmovable allocations
are more likely to group into that block rather than claim an
additional one.

I had to do a double take for block merges beyond pageblock_order,
wondering if we could claim multiple blocks for requests (capc->order)
smaller than a block. But that can't happen: once we reach
pageblock_order during merging, we claim, capture and exit. That means
order > pageblock_order can only happen if capc->order is actually
larger than a pageblock as well. I'll add a comment.
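
Roughly along these lines, on top of the hunk above (wording is just a
sketch, not the final comment):

	if (order >= pageblock_order) {
		/*
		 * Merging claims, captures and exits as soon as
		 * pageblock_order is reached, so order can only exceed
		 * pageblock_order here when capc->order is itself
		 * larger than a pageblock. Claim all blocks covered by
		 * the captured page.
		 */
		migratetype = capc->migratetype;
		change_pageblock_range(page, order, migratetype);
	}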
