Message-ID: <xa1ttwsgi9yk.fsf@mina86.com>
Date: Mon, 03 Aug 2015 19:00:03 +0200
From: Michal Nazarewicz <mina86@...a86.com>
To: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>,
Minchan Kim <minchan@...nel.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH v2 2/5] mm, compaction: simplify handling restart position in free pages scanner
On Fri, Jul 31 2015, Vlastimil Babka wrote:
> Handling the position where the compaction free scanner should restart (stored in
> cc->free_pfn) got more complex with commit e14c720efdd7 ("mm, compaction:
> remember position within pageblock in free pages scanner"). Currently the
> position is updated in each loop iteration of isolate_freepages(), although it
> should be enough to update it only when breaking from the loop. There's also
> an extra check outside the loop that updates the position in case we have met the
> migration scanner.
>
> This can be simplified if we move the test for having isolated enough from the
> for-loop header next to the test for contention, and determining the restart
> position only in these cases. We can reuse the isolate_start_pfn variable for
> this instead of setting cc->free_pfn directly. Outside the loop, we can simply
> set cc->free_pfn to the current value of isolate_start_pfn without any extra check.
>
> Also add a VM_BUG_ON to catch a possible mistake in the future, in case we later
> add a new condition that terminates isolate_freepages_block() prematurely
> without also considering the condition in isolate_freepages().
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> Cc: Minchan Kim <minchan@...nel.org>
> Acked-by: Mel Gorman <mgorman@...e.de>
> Acked-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Michal Nazarewicz <mina86@...a86.com>
Acked-by: Michal Nazarewicz <mina86@...a86.com>
> Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Rik van Riel <riel@...hat.com>
> Cc: David Rientjes <rientjes@...gle.com>
> ---
> mm/compaction.c | 35 ++++++++++++++++++++---------------
> 1 file changed, 20 insertions(+), 15 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index d46aaeb..7e0a814 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -947,8 +947,7 @@ static void isolate_freepages(struct compact_control *cc)
> * pages on cc->migratepages. We stop searching if the migrate
> * and free page scanners meet or enough free pages are isolated.
> */
> - for (; block_start_pfn >= low_pfn &&
> - cc->nr_migratepages > cc->nr_freepages;
> + for (; block_start_pfn >= low_pfn;
> block_end_pfn = block_start_pfn,
> block_start_pfn -= pageblock_nr_pages,
> isolate_start_pfn = block_start_pfn) {
> @@ -980,6 +979,8 @@ static void isolate_freepages(struct compact_control *cc)
> block_end_pfn, freelist, false);
>
> /*
> + * If we isolated enough freepages, or aborted due to async
> + * compaction being contended, terminate the loop.
> * Remember where the free scanner should restart next time,
> * which is where isolate_freepages_block() left off.
> * But if it scanned the whole pageblock, isolate_start_pfn
> @@ -988,27 +989,31 @@ static void isolate_freepages(struct compact_control *cc)
> * In that case we will however want to restart at the start
> * of the previous pageblock.
> */
> - cc->free_pfn = (isolate_start_pfn < block_end_pfn) ?
> - isolate_start_pfn :
> - block_start_pfn - pageblock_nr_pages;
> -
> - /*
> - * isolate_freepages_block() might have aborted due to async
> - * compaction being contended
> - */
> - if (cc->contended)
> + if ((cc->nr_freepages >= cc->nr_migratepages)
> + || cc->contended) {
> + if (isolate_start_pfn >= block_end_pfn)
> + isolate_start_pfn =
> + block_start_pfn - pageblock_nr_pages;
> break;
> + } else {
> + /*
> + * isolate_freepages_block() should not terminate
> + * prematurely unless contended, or isolated enough
> + */
> + VM_BUG_ON(isolate_start_pfn < block_end_pfn);
> + }
> }
>
> /* split_free_page does not map the pages */
> map_pages(freelist);
>
> /*
> - * If we crossed the migrate scanner, we want to keep it that way
> - * so that compact_finished() may detect this
> + * Record where the free scanner will restart next time. Either we
> + * broke from the loop and set isolate_start_pfn based on the last
> + * call to isolate_freepages_block(), or we met the migration scanner
> + * and the loop terminated due to isolate_start_pfn < low_pfn
> */
> - if (block_start_pfn < low_pfn)
> - cc->free_pfn = cc->migrate_pfn;
> + cc->free_pfn = isolate_start_pfn;
> }
>
> /*
> --
> 2.4.6
>
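For readers following along, below is a minimal userspace sketch of the control
flow the patch ends up with.  struct cc, isolate_block(), PAGEBLOCK_NR_PAGES,
the stub body and the values in main() are hypothetical stand-ins for
struct compact_control, isolate_freepages_block() and the real constants; this
only illustrates the restart-position logic described above, it is not the
actual mm/compaction.c code.

#include <assert.h>
#include <stdbool.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* hypothetical pageblock size */

struct cc {				/* stand-in for struct compact_control */
	unsigned long free_pfn;
	unsigned long nr_freepages;
	unsigned long nr_migratepages;
	bool contended;
};

/*
 * Stand-in for isolate_freepages_block(): pretend the whole block was
 * scanned and nothing was isolated, so the loop keeps going until the
 * scanners meet.
 */
static unsigned long isolate_block(struct cc *cc, unsigned long *start_pfn,
				   unsigned long end_pfn)
{
	(void)cc;
	*start_pfn = end_pfn;
	return 0;
}

static void isolate_freepages_model(struct cc *cc, unsigned long low_pfn,
				    unsigned long block_start_pfn,
				    unsigned long block_end_pfn)
{
	unsigned long isolate_start_pfn = cc->free_pfn;

	for (; block_start_pfn >= low_pfn;
	     block_end_pfn = block_start_pfn,
	     block_start_pfn -= PAGEBLOCK_NR_PAGES,
	     isolate_start_pfn = block_start_pfn) {

		cc->nr_freepages += isolate_block(cc, &isolate_start_pfn,
						  block_end_pfn);

		if (cc->nr_freepages >= cc->nr_migratepages ||
		    cc->contended) {
			/*
			 * Breaking out early: if the block was scanned to
			 * its end, restart at the start of the previous
			 * pageblock instead.
			 */
			if (isolate_start_pfn >= block_end_pfn)
				isolate_start_pfn =
					block_start_pfn - PAGEBLOCK_NR_PAGES;
			break;
		}
		/* otherwise the block must have been fully scanned */
		assert(isolate_start_pfn >= block_end_pfn);
	}

	/*
	 * Either we broke out above, or the scanners met and the loop
	 * exited with isolate_start_pfn == block_start_pfn < low_pfn;
	 * either way a single assignment records the restart position.
	 */
	cc->free_pfn = isolate_start_pfn;
}

int main(void)
{
	struct cc cc = { .free_pfn = 3584, .nr_migratepages = 1 };

	/*
	 * With the stub isolating nothing, the scanners "meet" and the
	 * loop exits; free_pfn then ends up below low_pfn, which is
	 * roughly the condition compact_finished() looks for.
	 */
	isolate_freepages_model(&cc, 2048, 3584, 3584 + PAGEBLOCK_NR_PAGES);
	assert(cc.free_pfn < 2048);
	return 0;
}

The nice property of the restructuring is visible here: cc->free_pfn is
written exactly once, and the only special case (a fully scanned block when
breaking early) sits right next to the break that needs it.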
--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<mpn@...gle.com>--<xmpp:mina86@...ber.org>--ooO--(_)--Ooo--