Message-ID: <4BCDBCC4.60401@redhat.com>
Date: Tue, 20 Apr 2010 10:40:04 -0400
From: Rik van Riel <riel@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>
CC: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>,
Mel Gorman <mel@....ul.ie>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Nick Piggin <npiggin@...e.de>,
Chris Mason <chris.mason@...cle.com>,
Jens Axboe <jens.axboe@...cle.com>,
linux-kernel@...r.kernel.org, gregkh@...ell.com,
Corrado Zoccolo <czoccolo@...il.com>
Subject: Re: [RFC PATCH 0/3] Avoid the use of congestion_wait under zone pressure
On 04/19/2010 05:44 PM, Johannes Weiner wrote:
> What do people think?
It has potential advantages and disadvantages.
On smaller desktop systems, it is entirely possible that
the working set is close to half of the page cache. Your
patch reduces the amount of memory that is protected on
the active file list, so it may cause part of the working
set to get evicted.
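
For reference, the protection I am talking about is the
inactive_file_is_low() check in vmscan.c. Paraphrased
(this is a simplification, not the verbatim kernel code),
it boils down to:

static int inactive_file_is_low(unsigned long nr_active_file,
                                unsigned long nr_inactive_file)
{
        /*
         * Only allow the active file list to be shrunk while
         * the inactive file list is the smaller of the two;
         * in effect this protects up to half of the file
         * cache from eviction.
         */
        return nr_inactive_file < nr_active_file;
}

Your patch lowers that balance point, which is exactly
what hurts when the working set is around half the cache.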
On the other hand, having a smaller active list frees up
more memory for sequential (streaming, use-once) disk IO.
This can be useful on systems with large IO subsystems
and small memory (like Christian's s390 virtual machine,
with 256MB RAM and 4 disks!).
I wonder if we could find some automatic way to
balance between these two situations, for example by
excluding currently-in-flight pages from the calculations.
In Christian's case, he could have 160MB of cache (buffer
+ page cache), of which 70MB is in flight to disk at a
time. It may be worthwhile to exclude that 70MB from the
total, leaving 90MB, and aim for 45MB active file and
45MB inactive file pages on his system. That way IO does
not get starved.
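
In code, the idea could look something like this (just a
sketch, not a patch: the function name is made up, the
in-flight count would presumably come from something like
NR_WRITEBACK, and I am assuming for simplicity that all
in-flight pages sit on the inactive file list):

static int inactive_file_is_low_adjusted(unsigned long nr_active_file,
                                         unsigned long nr_inactive_file,
                                         unsigned long nr_in_flight)
{
        unsigned long total = nr_active_file + nr_inactive_file;
        unsigned long effective_total, effective_inactive;

        /* Take pages in flight to disk out of the calculation. */
        if (nr_in_flight >= nr_inactive_file)
                return 0;
        effective_total = total - nr_in_flight;
        effective_inactive = nr_inactive_file - nr_in_flight;

        /*
         * Aim for a 50/50 split of what remains, e.g.
         * 160MB cache - 70MB in flight = 90MB, balanced
         * as 45MB active / 45MB inactive.
         */
        return effective_inactive < effective_total / 2;
}

With no IO in flight this degenerates into the current
50/50 check, so a desktop keeps its working set
protection automatically.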
On a desktop system, which needs the working set protected
and does less IO, we will automatically protect more of
the working set - since there is no IO to starve.