Message-ID: <20190117155117.GI27437@techsingularity.net>
Date: Thu, 17 Jan 2019 15:51:17 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Linux-MM <linux-mm@...ck.org>,
David Rientjes <rientjes@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>, ying.huang@...el.com,
kirill@...temov.name, Andrew Morton <akpm@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 13/25] mm, compaction: Use free lists to quickly locate a
migration target
On Thu, Jan 17, 2019 at 03:36:08PM +0100, Vlastimil Babka wrote:
> > /* Reorder the free list to reduce repeated future searches */
> > static void
> > -move_freelist_tail(struct list_head *freelist, struct page *freepage)
> > +move_freelist_head(struct list_head *freelist, struct page *freepage)
> > {
> > LIST_HEAD(sublist);
> >
> > @@ -1147,6 +1147,193 @@ move_freelist_tail(struct list_head *freelist, struct page *freepage)
> > }
> > }
>
> Hmm this hunk appears to simply rename move_freelist_tail() to
> move_freelist_head(), but fast_find_migrateblock() is unchanged, so it now calls
> the new version below.
>
Rebase screwup. I'll fix it up and retest.
> <SNIP>
> BTW it would be nice to
> document what both of these functions are doing at a high level :) The one
> above was a bit tricky for me to decode, as it seems to be moving the initial
> part of the list to the tail, to effectively move the latter part of the list
> (including freepage) to the head.
>
I'll include a blurb.
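
Something along these lines for the head variant -- untested sketch, and
the exact comment wording (and whether it ends up using list_cut_before())
may change in the respin:

	/*
	 * move_freelist_head() is used when a free list is walked in
	 * reverse. The entries between the list head and @freepage are
	 * spliced to the tail so that @freepage and everything after it
	 * effectively move to the head. A later search starting from the
	 * head then resumes close to where the previous one stopped.
	 */
	static void
	move_freelist_head(struct list_head *freelist, struct page *freepage)
	{
		LIST_HEAD(sublist);

		/* Nothing to do if freepage is already at the head */
		if (freelist->next == &freepage->lru)
			return;

		/* Detach everything before @freepage, reattach at the tail */
		list_cut_before(&sublist, freelist, &freepage->lru);
		list_splice_tail(&sublist, freelist);
	}
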
> > + /*
> > + * If starting the scan, use a deeper search and use the highest
> > + * PFN found if a suitable one is not found.
> > + */
> > + if (cc->free_pfn == pageblock_start_pfn(zone_end_pfn(cc->zone) - 1)) {
> > + limit = pageblock_nr_pages >> 1;
> > + scan_start = true;
> > + }
> > +
> > + /*
> > + * Preferred point is in the top quarter of the scan space but take
> > + * a pfn from the top half if the search is problematic.
> > + */
> > + distance = (cc->free_pfn - cc->migrate_pfn);
> > + low_pfn = pageblock_start_pfn(cc->free_pfn - (distance >> 2));
> > + min_pfn = pageblock_start_pfn(cc->free_pfn - (distance >> 1));
> > +
> > + if (WARN_ON_ONCE(min_pfn > low_pfn))
> > + low_pfn = min_pfn;
> > +
> > + for (order = cc->order - 1;
> > + order >= 0 && !page;
> > + order--) {
> > + struct free_area *area = &cc->zone->free_area[order];
> > + struct list_head *freelist;
> > + struct page *freepage;
> > + unsigned long flags;
> > +
> > + if (!area->nr_free)
> > + continue;
> > +
> > + spin_lock_irqsave(&cc->zone->lock, flags);
> > + freelist = &area->free_list[MIGRATE_MOVABLE];
> > + list_for_each_entry_reverse(freepage, freelist, lru) {
> > + unsigned long pfn;
> > +
> > + order_scanned++;
> > + nr_scanned++;
>
> Seems order_scanned is supposed to be reset to 0 for each new order? Otherwise
> it's equivalent to nr_scanned...
>
Yes, it was meant to be. Not sure at what point I broke that and failed
to spot it afterwards. As you note elsewhere, the code structure doesn't
make sense if order_scanned isn't being reset to 0: instead of doing a
shorter search at each order, it would simply check one page for each
lower order.
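
The intent for the respin is to reset it at the top of the order loop so
each order gets its own bounded search rather than sharing one counter,
roughly (untested, the elided part of the loop body stays as quoted above):

	for (order = cc->order - 1;
	     order >= 0 && !page;
	     order--) {
		struct free_area *area = &cc->zone->free_area[order];

		/* Reset so each order gets its own short search */
		order_scanned = 0;

		if (!area->nr_free)
			continue;

		/*
		 * ... walk the MIGRATE_MOVABLE free list under zone->lock
		 * as quoted above, breaking out of the walk once
		 * order_scanned reaches the search limit ...
		 */
	}
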
Thanks!
--
Mel Gorman
SUSE Labs