Message-ID: <507380F8.4000401@linaro.org>
Date: Mon, 08 Oct 2012 18:42:16 -0700
From: John Stultz <john.stultz@...aro.org>
To: Mel Gorman <mgorman@...e.de>
CC: Anton Vorontsov <anton.vorontsov@...aro.org>,
Pekka Enberg <penberg@...nel.org>,
Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Minchan Kim <minchan@...nel.org>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Colin Cross <ccross@...roid.com>,
Arve Hjønnevåg <arve@...roid.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org,
patches@...aro.org, kernel-team@...roid.com
Subject: Re: [RFC] vmevent: Implement pressure attribute
On 10/08/2012 02:46 AM, Mel Gorman wrote:
> On Sun, Oct 07, 2012 at 01:14:17AM -0700, Anton Vorontsov wrote:
>> And here we just try to let userland assist; userland can tell "oh,
>> don't bother with swapping or draining caches, I can just free some
>> memory".
>>
>> Quite interestingly, this also very much resembles volatile mmap ranges
>> (i.e. the work that John Stultz is leading in parallel).
>>
> Agreed. I haven't been paying close attention to those patches, but it
> seems to me that one possibility is that a listener for a vmevent would
> set volatile ranges in response.
I don't have much to comment on the rest of this mail, but I just
wanted to pipe in here, as volatile ranges have caused some confusion.
While your suggestion would be possible, with volatile ranges I've been
promoting a more hands-off approach from the application's perspective,
where the application would always mark data that can be regenerated
as volatile, unmarking it when accessing it.
This way the application doesn't need to be responsive to memory
pressure; the kernel just takes what it needs from what the application
made available.
Only when the application needs the data again would it mark it
non-volatile (or, alternatively, with the new SIGBUS semantics, access
the purged volatile data and catch a SIGBUS), find the data was purged,
and regenerate it.
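
To make that concrete, here is a rough sketch of the pattern. The
MADV_VOLATILE/MADV_NONVOLATILE flags and the "returns positive if any
pages were purged" convention below are just placeholders for whatever
interface we finally settle on; none of this is in mainline today:

	#include <stdlib.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_VOLATILE		/* hypothetical flags, not in mainline */
	#define MADV_VOLATILE		20
	#define MADV_NONVOLATILE	21
	#endif

	#define CACHE_SIZE	(1 << 20)

	/* Stand-in for the real work of rebuilding the cached data. */
	static void regenerate(char *buf, size_t len)
	{
		memset(buf, 'x', len);
	}

	int main(void)
	{
		char *cache = mmap(NULL, CACHE_SIZE, PROT_READ | PROT_WRITE,
				   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (cache == MAP_FAILED)
			exit(1);
		regenerate(cache, CACHE_SIZE);

		/* Done with the data for now: the kernel may take it. */
		madvise(cache, CACHE_SIZE, MADV_VOLATILE);

		/* ... later, before touching the data again ... */
		if (madvise(cache, CACHE_SIZE, MADV_NONVOLATILE) > 0)
			/* the range was purged while it was volatile */
			regenerate(cache, CACHE_SIZE);

		/* now safe to use cache again */
		return 0;
	}

Note there is no memory-pressure callback anywhere in that sketch;
the marking happens unconditionally as part of the application's
normal use of the data.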
That said, hybrid approaches like you suggested would be possible, but
at a certain point, if we're waiting for a notification to take action,
it might be better to directly free that memory rather than setting it
as volatile and leaving the kernel to then reclaim it for you.
thanks
-john