Message-ID: <20120912235855.GB2766@bbox>
Date: Thu, 13 Sep 2012 08:58:55 +0900
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Kyungmin Park <kmpark@...radead.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Nazarewicz <mina86@...a86.com>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH] mm: cma: Discard clean pages during contiguous
allocation instead of migration
On Wed, Sep 12, 2012 at 01:07:32PM -0700, Andrew Morton wrote:
> On Tue, 11 Sep 2012 09:41:52 +0900
> Minchan Kim <minchan@...nel.org> wrote:
>
> > This patch drops clean cache pages instead of migrating them during
> > alloc_contig_range(), to minimise allocation latency by reducing the
> > amount of migration that is necessary. It is useful for CMA because
> > keeping migration latency low matters more than preserving the working
> > set of background processes.
> > In addition, because pages are reclaimed rather than migrated, fewer
> > free pages are needed as migration targets, so we avoid entering memory
> > reclaim just to produce them, which is itself a contributory factor to
> > increased latency.
> >
> > * from v1
> > * drop migrate_mode_t
> > * add reclaim_clean_pages_from_list instead of MIGRATE_DISCARD support - Mel
> >
> > I measured the elapsed time of __alloc_contig_migrate_range(), which
> > migrates 10M in a 40M movable zone, on a QEMU machine.
> >
> > Before - 146ms, After - 7ms
> >
> > ...
> >
> > @@ -758,7 +760,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> > wait_on_page_writeback(page);
> > }
> >
> > - references = page_check_references(page, sc);
> > + if (!force_reclaim)
> > + references = page_check_references(page, sc);
>
> grumble. Could we please document `enum page_references' and
> page_check_references()?
>
> And the `force_reclaim' arg could do with some documentation. It only
> forces reclaim under certain circumstances. They should be described,
> and a reason should be provided.
I will give it a shot in another patch.
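For the record, this is roughly what I have in mind for the enum (a
paraphrase of the current semantics, not the final kerneldoc):

enum page_references {
	PAGEREF_RECLAIM,	/* reclaim the page; write it back if dirty */
	PAGEREF_RECLAIM_CLEAN,	/* reclaim only if no writeback is needed */
	PAGEREF_KEEP,		/* referenced; keep it on the inactive list */
	PAGEREF_ACTIVATE,	/* referenced enough; move to the active list */
};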
>
> Why didn't this patch use PAGEREF_RECLAIM_CLEAN? It is possible for
> someone to dirty one of these pages after we tested its cleanness and
> we'll then go off and write it out, but we won't be reclaiming it?
Absolutely.
Thanks Andrew!
Here it goes.
====== 8< ======
From 90022feb9ecf8e9a4efba7cbf49d7cead777020f Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Thu, 13 Sep 2012 08:45:58 +0900
Subject: [PATCH] mm: cma: reclaim only clean pages
Pages can become dirty after the cleanliness check in
reclaim_clean_pages_from_list(), in which case shrink_page_list() ends
up paging them out, which is never what we want when the whole point of
this path is to speed things up. This patch fixes it by initialising
`references` to PAGEREF_RECLAIM_CLEAN, so that in the force_reclaim
case such a page is kept instead of being written back.
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Michal Nazarewicz <mina86@...a86.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Mel Gorman <mgorman@...e.de>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f8f56f8..1ee4b69 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -694,7 +694,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
struct address_space *mapping;
struct page *page;
int may_enter_fs;
- enum page_references references = PAGEREF_RECLAIM;
+ enum page_references references = PAGEREF_RECLAIM_CLEAN;
cond_resched();
--
1.7.9.5
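For context, this one-liner interacts with the caller added in the v2
patch; it looks roughly like this (a sketch, not a literal copy, and
the exact signatures may differ slightly):

/*
 * Clean file pages are split off and pushed through shrink_page_list()
 * with force_reclaim == true, which skips page_check_references().
 * With `references` now starting as PAGEREF_RECLAIM_CLEAN, a page that
 * was dirtied after the PageDirty() test below is kept instead of
 * being written back.
 */
unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
	};
	unsigned long ret, dummy1, dummy2;
	struct page *page, *next;
	LIST_HEAD(clean_pages);

	list_for_each_entry_safe(page, next, page_list, lru) {
		/* only clean file-backed pages are cheap to drop */
		if (page_is_file_cache(page) && !PageDirty(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}

	ret = shrink_page_list(&clean_pages, zone, &sc,
			       TTU_UNMAP|TTU_IGNORE_ACCESS,
			       &dummy1, &dummy2, true);
	list_splice(&clean_pages, page_list);
	return ret;
}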
--
Kind regards,
Minchan Kim
--