Message-ID: <4FE37434.808@linaro.org>
Date: Thu, 21 Jun 2012 12:21:24 -0700
From: John Stultz <john.stultz@...aro.org>
To: Minchan Kim <minchan@...nel.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Anton Vorontsov <anton.vorontsov@...aro.org>,
Pekka Enberg <penberg@...nel.org>,
Wu Fengguang <fengguang.wu@...el.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: RFC: Easy-Reclaimable LRU list
On 06/18/2012 10:49 PM, Minchan Kim wrote:
> Hi everybody!
>
> Recently, there are some efforts to handle system memory pressure.
>
> 1) low memory notification - [1]
> 2) fallocate(VOLATILE) - [2]
> 3) fadvise(NOREUSE) - [3]
>
> For them, I would like to add a new LRU list, "Ereclaimable", which is the opposite of "unevictable".
> This LRU list would hold _easily_ reclaimable pages.
> For example, easily reclaimable pages include the following:
>
> 1. pages that have been invalidated but remain on the LRU list
> 2. pages being paged out for reclaim (PG_reclaim pages)
> 3. pages marked with fadvise(NOREUSE)
> 4. pages marked with fallocate(VOLATILE)
>
> These pages shouldn't stir the normal LRU lists, and compaction might even skip migrating them.
> The reclaimer could reclaim Ereclaimable pages before scanning the normal LRU lists, and would
> avoid unnecessary swapout of anon pages on the easily-reclaimable LRU list.
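If I'm reading the proposal right, the reclaim ordering might look roughly like this userspace sketch (all names and structures are invented for illustration; this is not actual kernel code): the Ereclaimable list is drained first, so easily reclaimable pages are freed without touching the normal LRU.

```c
/* Illustrative userspace model of the proposed reclaim ordering:
 * drain the Ereclaimable list before the normal LRU. All names here
 * are made up for the sketch; none of this is real kernel code.
 */
#include <stddef.h>

enum lru { LRU_ERECLAIMABLE, LRU_NORMAL, NR_LRU };

static size_t lru_count[NR_LRU];

static void add_page(enum lru l)
{
	lru_count[l]++;
}

/* Reclaim up to nr pages, preferring the Ereclaimable list. */
static size_t reclaim(size_t nr)
{
	size_t freed = 0;

	for (enum lru l = LRU_ERECLAIMABLE; l < NR_LRU && freed < nr; l++) {
		size_t take = nr - freed;

		if (take > lru_count[l])
			take = lru_count[l];
		lru_count[l] -= take;	/* stand-in for freeing pages */
		freed += take;
	}
	return freed;
}
```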
I was hoping there would be further comment on this by more core VM
devs, but so far things have been quiet (is everyone on vacation?).
Overall this seems reasonable for the volatile ranges functionality.
The one down-side being that dealing with the ranges on a per-page basis
can make marking and unmarking larger ranges as volatile fairly
expensive. In my tests with my last patchset, it was over 75x slower
(~1.5ms) marking and unmarking a 1MB range when we deactivate and
activate all of the pages, instead of just inserting the volatile range
into an interval tree and purging via the shrinker (~20us). Granted, my
initial approach is somewhat naive, and some pagevec batching has
improved things three-fold (down to ~500us), but I'm still ~25x slower
when iterating over all the pages.
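To make the cost asymmetry concrete, here's a rough userspace model (names and structures are invented for illustration; this is not the actual patchset code): the per-page path does work proportional to the number of pages in the range, while recording the whole range once is constant regardless of size.

```c
/* Illustrative userspace sketch contrasting the cost of marking a range
 * volatile per-page versus recording the range once, as with the
 * interval-tree approach. All names are invented for this sketch.
 */
#include <stddef.h>

#define PAGE_SIZE 4096

/* Per-page path: one deactivation-style operation for every page. */
static size_t mark_volatile_per_page(size_t start, size_t len)
{
	size_t ops = 0;

	for (size_t off = 0; off < len; off += PAGE_SIZE)
		ops++;		/* stand-in for moving one page between LRUs */
	return ops;
}

/* Range path: one record insertion, regardless of range size.
 * (A flat array stands in for the interval tree here.)
 */
struct vrange {
	size_t start;
	size_t len;
};

static struct vrange ranges[64];
static size_t nr_ranges;

static size_t mark_volatile_range(size_t start, size_t len)
{
	ranges[nr_ranges].start = start;	/* stand-in for tree insert */
	ranges[nr_ranges].len = len;
	nr_ranges++;
	return 1;		/* one operation, independent of range size */
}
```

For a 1MB range with 4KB pages, the per-page path does 256 operations to the range path's one, which tracks the rough shape of the timing gap above even before cache effects.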
There are surely further improvements to be made, but this added cost
worries me, as users are unlikely to generously volunteer up memory to
the kernel as volatile if doing so frequently adds significant overhead.
This makes me wonder if having something like an early-shrinker, which
gets called prior to shrinking the LRUs, might be a better approach for
volatile ranges. It would still be NUMA-unaware, but would keep the
overhead very light for both users of volatile ranges and non-users.
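A rough userspace model of what I mean (early_shrinker and the surrounding names are invented for this sketch, not a real kernel API): callbacks registered here would run on memory pressure before any LRU scanning, so volatile ranges get purged first and cheaply.

```c
/* Hypothetical userspace model of the "early shrinker" idea: purge
 * callbacks run before the LRU lists are shrunk. Every name here is
 * invented for illustration; this is not an existing kernel interface.
 */
#include <stddef.h>

struct early_shrinker {
	size_t (*purge)(size_t nr_wanted);	/* returns pages freed */
};

#define MAX_SHRINKERS 8
static struct early_shrinker *shrinkers[MAX_SHRINKERS];
static size_t nr_shrinkers;

static void register_early_shrinker(struct early_shrinker *s)
{
	if (nr_shrinkers < MAX_SHRINKERS)
		shrinkers[nr_shrinkers++] = s;
}

/* Called on memory pressure, before shrinking the LRU lists. */
static size_t run_early_shrinkers(size_t nr_wanted)
{
	size_t freed = 0;

	for (size_t i = 0; i < nr_shrinkers && freed < nr_wanted; i++)
		freed += shrinkers[i]->purge(nr_wanted - freed);
	return freed;
}

/* Example user: pretends to hold 100 volatile pages it can drop. */
static size_t volatile_pages = 100;

static size_t purge_volatile(size_t nr_wanted)
{
	size_t n = nr_wanted < volatile_pages ? nr_wanted : volatile_pages;

	volatile_pages -= n;
	return n;
}

static struct early_shrinker vr_shrinker = { .purge = purge_volatile };
```

Non-users pay only the cost of walking an empty callback list, which is what keeps the overhead light when no volatile ranges exist.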
Even so, I'd be interested in seeing more about your approach, in the
hopes that it might not be as costly as my initial attempt. Do you have
any plans to start prototyping this?
thanks
-john
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/