Message-Id: <20090206012605.378214179@...xchg.org>
User-Agent: quilt/0.47-1
Date: Fri, 06 Feb 2009 02:26:05 +0100
From: Johannes Weiner <hannes@...xchg.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Rik van Riel <riel@...hat.com>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH 0/3] swsusp: shrink file cache first
Hello!
Here are three patches that adjust the memory shrinking code used for
suspend-to-disk.
The first two patches are cleanups only and can probably go in
regardless of the third one.
The third patch changes the shrink_all_memory() logic to drop the file
cache first, before touching any mapped file pages, and only then go
for anon pages.
The reason is that everything not shrunk before suspension has to go
into the image and is 'prefaulted' back in before the processes can
resume and the system is usable again.  The image should therefore be
small and contain only pages that are likely to be used again right
after resume, which in turn means that the inactive file cache is the
best place to start decimating used memory.
Also, right now, subsequent faults of contiguously mapped files are
likely to perform better than swapin (see
http://kernelnewbies.org/KernelProjects/SwapoutClustering), so not
only is file cache preferred over other page types, but file pages
are preferred over anon pages in general.
Testing so far shows that the patch does what is intended: it shrinks
the file cache in favor of anon pages.  But whether the idea is
correct to begin with is hard to quantify, and I am still working on
that, so this is an RFC only.
Hannes