Message-ID: <20151208065116.GA6902@js1304-P5Q-DELUXE>
Date: Tue, 8 Dec 2015 15:51:16 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Aaron Lu <aaron.lu@...el.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>
Subject: Re: [RFC 0/3] reduce latency of direct async compaction
On Tue, Dec 08, 2015 at 01:14:39PM +0800, Aaron Lu wrote:
> On Tue, Dec 08, 2015 at 09:41:18AM +0900, Joonsoo Kim wrote:
> > On Mon, Dec 07, 2015 at 04:59:56PM +0800, Aaron Lu wrote:
> > > On Mon, Dec 07, 2015 at 04:35:24PM +0900, Joonsoo Kim wrote:
> > > > It looks like some overhead still remains. I guess the migration scanner
> > > > would call pageblock_pfn_to_page() over a more extended range, so some
> > > > overhead remains.
> > > >
> > > > I have an idea to solve this problem. Aaron, could you test the following patch
> > > > on top of base? It tries to skip calling pageblock_pfn_to_page()
> > >
> > > It doesn't apply on top of 25364a9e54fb8296837061bf684b76d20eec01fb
> > > cleanly, so I made some changes to make it apply and the result is:
> > > https://github.com/aaronlu/linux/commit/cb8d05829190b806ad3948ff9b9e08c8ba1daf63
> >
> > Yes, that's okay. I made it on my working branch, so it won't cause
> > any problem other than failing to apply cleanly.
> >
> > >
> > > A problem occurred right after the test started:
> > > [ 58.080962] BUG: unable to handle kernel paging request at ffffea0082000018
> > > [ 58.089124] IP: [<ffffffff81193f29>] compaction_alloc+0xf9/0x270
> > > [ 58.096109] PGD 107ffd6067 PUD 207f7d5067 PMD 0
> > > [ 58.101569] Oops: 0000 [#1] SMP
> >
> > I made a mistake. Please test the following patch. It was also made
> > on my working branch, so you will need to resolve a conflict, but it
> > should be trivial.
> >
> > I inserted some logs to check whether the zone is contiguous or not.
> > Please check that the Normal zone is marked contiguous after testing.
>
> Yes it is contiguous, but unfortunately, the problem remains:
> [ 56.536930] check_zone_contiguous: Normal
> [ 56.543467] check_zone_contiguous: Normal: contiguous
> [ 56.549640] BUG: unable to handle kernel paging request at ffffea0082000018
> [ 56.557717] IP: [<ffffffff81193f29>] compaction_alloc+0xf9/0x270
> [ 56.564719] PGD 107ffd6067 PUD 207f7d5067 PMD 0
>
Maybe I've found the reason: cc->free_pfn can be initialized to an invalid
pfn that is never checked, so the optimized pageblock_pfn_to_page() causes
the BUG(). I added a work-around for this problem in isolate_freepages().
Please test the following patch.
Thanks.
---------->8---------------
From 7e954a68fb555a868acc5860627a1ad8dadbe3bf Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@....com>
Date: Mon, 7 Dec 2015 14:51:42 +0900
Subject: [PATCH] mm/compaction: Optimize pageblock_pfn_to_page() for
contiguous zone
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
 include/linux/mmzone.h |  1 +
 mm/compaction.c        | 60 +++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 60 insertions(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e23a9e7..573f9a9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -521,6 +521,7 @@ struct zone {
 #endif

 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
+	int contiguous;	/* -1: has holes, 0: not yet checked, 1: contiguous */
 	/* Set to true when the PG_migrate_skip bits should be cleared */
 	bool compact_blockskip_flush;
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index de3e1e7..ff5fb04 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -88,7 +88,7 @@ static inline bool migrate_async_suitable(int migratetype)
  * the first and last page of a pageblock and avoid checking each individual
  * page in a pageblock.
  */
-static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
+static struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
 				unsigned long end_pfn, struct zone *zone)
 {
 	struct page *start_page;
@@ -114,6 +114,56 @@ static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 	return start_page;
 }
+static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
+				unsigned long end_pfn, struct zone *zone)
+{
+	if (zone->contiguous == 1)
+		return pfn_to_page(start_pfn);
+
+	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
+}
+
+static void check_zone_contiguous(struct zone *zone)
+{
+	unsigned long block_start_pfn = zone->zone_start_pfn;
+	unsigned long block_end_pfn;
+	unsigned long pfn;
+
+	/* Already checked */
+	if (zone->contiguous)
+		return;
+
+	printk("%s: %s\n", __func__, zone->name);
+	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
+	for (; block_start_pfn < zone_end_pfn(zone);
+			block_start_pfn = block_end_pfn,
+			block_end_pfn += pageblock_nr_pages) {
+
+		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
+
+		if (!__pageblock_pfn_to_page(block_start_pfn,
+					block_end_pfn, zone)) {
+			/* We have a hole */
+			zone->contiguous = -1;
+			printk("%s: %s: uncontiguous\n", __func__, zone->name);
+			return;
+		}
+
+		/* Check validity of each pfn within the pageblock */
+		for (pfn = block_start_pfn; pfn < block_end_pfn; pfn++) {
+			if (!pfn_valid_within(pfn)) {
+				zone->contiguous = -1;
+				printk("%s: %s: uncontiguous\n", __func__, zone->name);
+				return;
+			}
+		}
+	}
+
+	/* No hole found */
+	zone->contiguous = 1;
+	printk("%s: %s: contiguous\n", __func__, zone->name);
+}
+
 #ifdef CONFIG_COMPACTION

 /* Do not skip compaction more than 64 times */
@@ -948,6 +998,12 @@ static void isolate_freepages(struct compact_control *cc)
 	unsigned long low_pfn;	/* lowest pfn scanner is able to scan */
 	struct list_head *freelist = &cc->freepages;

+	/* Work-around: don't start the free scanner on an out-of-zone pfn */
+	if (zone->contiguous == 1 &&
+	    cc->free_pfn == zone_end_pfn(zone) &&
+	    cc->free_pfn == (cc->free_pfn & ~(pageblock_nr_pages - 1)))
+		cc->free_pfn -= pageblock_nr_pages;
+
 	/*
 	 * Initialise the free scanner. The starting point is where we last
 	 * successfully isolated from, zone-cached value, or the end of the
@@ -1356,6 +1412,8 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 		;
 	}

+	check_zone_contiguous(zone);
+
 	/*
 	 * Clear pageblock skip if there were failures recently and compaction
 	 * is about to be retried after being deferred. kswapd does not do
--
1.9.1