Message-ID: <509025ED.8050207@redhat.com>
Date: Tue, 30 Oct 2012 15:09:33 -0400
From: Rik van Riel <riel@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
klamm@...dex-team.ru, mgorman@...e.de, hannes@...xchg.org
Subject: Re: [PATCH RFC] mm,vmscan: only evict file pages when we have plenty
On 10/30/2012 02:54 PM, Andrew Morton wrote:
> On Tue, 30 Oct 2012 14:42:04 -0400
> Rik van Riel <riel@...hat.com> wrote:
>
>> If we have more inactive file pages than active file pages, we
> skip scanning the active file pages altogether, with the idea
>> that we do not want to evict the working set when there is
>> plenty of streaming IO in the cache.
>
> Yes, I've never liked that. The "(active > inactive)" thing is a magic
> number. And suddenly causing a complete cessation of vm scanning at a
> particular magic threshold seems rather crude, compared to some complex
> graduated thing which will also always do the wrong thing, only more
> obscurely ;)
>
> Ho hum, in the absence of observed problems, I guess we don't muck with
> it.
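
For reference, the check being discussed boils down to
something like the sketch below. This is a minimal
userspace illustration of the behaviour, not the actual
mm/vmscan.c code; the struct and function names are
made up.

/*
 * Minimal userspace sketch of the heuristic quoted above.
 * The names (lru_counts, should_scan_active_file) are invented
 * for illustration and do not match mm/vmscan.c.
 */
#include <stdbool.h>
#include <stdio.h>

struct lru_counts {
	unsigned long active_file;
	unsigned long inactive_file;
};

/*
 * Skip the active file list while the inactive file list is at
 * least as large: streaming IO then cycles through the inactive
 * list without ever touching the working set on the active list.
 */
static bool should_scan_active_file(const struct lru_counts *lru)
{
	return lru->inactive_file < lru->active_file;
}

int main(void)
{
	/* lots of streaming IO: inactive file pages dominate */
	struct lru_counts lru = {
		.active_file	= 100,
		.inactive_file	= 400,
	};

	printf("scan active file list: %s\n",
	       should_scan_active_file(&lru) ? "yes" : "no");
	return 0;
}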
The thing is, when we "suddenly switch behaviour" back to
scanning all the lists, that does not have to suddenly
lead to pages from the other lists actually being evicted.
Instead, it will lead to referenced inactive_anon pages
being moved back to the active_anon list, and to pages
from the end of the active_file list being moved to the
inactive_file list.
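
To make that concrete, a resumed scan treats each page it
looks at roughly like this (again a made-up sketch with
invented types, not kernel code):

/*
 * Made-up types and list handling, purely for illustration.
 */
#include <stdbool.h>

struct page_stub {
	bool referenced;		/* accessed since the last scan? */
	struct page_stub *next;
};

struct lru_lists {
	struct page_stub *active;	/* head of the active list */
	struct page_stub *evict;	/* pages picked for reclaim */
};

static void push(struct page_stub **head, struct page_stub *page)
{
	page->next = *head;
	*head = page;
}

/*
 * A referenced page is rotated back to the active list (with its
 * referenced bit cleared); only unreferenced pages become reclaim
 * candidates.  This is why crossing the threshold does not
 * immediately evict anything from the other lists.
 */
static void scan_inactive_page(struct lru_lists *lru, struct page_stub *page)
{
	if (page->referenced) {
		page->referenced = false;
		push(&lru->active, page);	/* second chance */
	} else {
		push(&lru->evict, page);	/* reclaim candidate */
	}
}

int main(void)
{
	struct lru_lists lru = { 0 };
	struct page_stub hot = { .referenced = true };
	struct page_stub cold = { .referenced = false };

	scan_inactive_page(&lru, &hot);		/* rotated to active */
	scan_inactive_page(&lru, &cold);	/* queued for reclaim */
	return 0;
}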
There is a threshold, and Johannes has patches to set
it in a much more intelligent way, but the change in
behaviour should not be sudden, because the inactive
lists provide a rather large buffer.
When the VM is bouncing around the threshold, the
effect should just look like a reduction in the rate
at which the other lists are scanned.