Message-ID: <1334266210.2573.8.camel@shrek.rexursive.com>
Date: Fri, 13 Apr 2012 07:30:10 +1000
From: Bojan Smojver <bojan@...ursive.com>
To: Per Olofsson <pelle@...ian.org>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v10]: Hibernation: fix the number of pages used for
hibernate/thaw buffering
On Thu, 2012-04-12 at 18:30 +0200, Per Olofsson wrote:
> Indeed. I think what you want is:
>
> read_pages = min(low_free_pages(),
>                  nr_free_pages() - snapshot_get_image_size()) / 2;
I was thinking more like this:
----------------------
unsigned long read_pages = 0;
[...]
if (low_free_pages() > snapshot_get_image_size())
	read_pages = (low_free_pages() - snapshot_get_image_size()) / 2;
read_pages = clamp_val(read_pages, LZO_MIN_RD_PAGES, LZO_MAX_RD_PAGES);
----------------------
Where LZO_MIN_RD_PAGES and LZO_MAX_RD_PAGES are set to 1024 and 8192,
respectively (these values were picked empirically).
Because we don't really know how many highmem pages are in the image
(this is figured out by the prepare_image() function, halfway through
reading the image - so well after this calculation is done), we assume
the worst case scenario, which is that there are no highmem pages in
the image at all.
Given that we cannot use highmem pages for buffers anyway, the above
should be conservative enough. Of course, there is still some
possibility of running out of pages, but the kernel is usually in
pretty good shape memory-wise at image load time, so we should be able
to squeeze a few MBs out of it, at least.
--
Bojan