Date:	Sat, 17 Mar 2012 17:51:53 +1100
From:	Bojan Smojver <bojan@...ursive.com>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
Cc:	linux-kernel@...r.kernel.org
Subject: [PATCH]: Lower the number of pages used for buffering during hibernation

Hi Rafael,

The patch below reduces the number of pages that can be used by the
hibernation code for buffering. Previously, we would take half of the
free pages as a starting point; now we take only a quarter. We also
clamp the value to an absolute maximum, just to be sure.
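
For illustration only (not part of the patch), a minimal userspace
sketch of the new write-buffer sizing, assuming 4 KiB pages and a
made-up free-page count; clamp_val() below is a stand-in for the
kernel macro of the same name:

#include <stdio.h>

#define BUF_WRITE_PAGES	(1 << 15)	/* new absolute cap: 32768 pages */

/* stand-in for the kernel's clamp_val() */
#define clamp_val(v, lo, hi)	((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

int main(void)
{
	unsigned long free_pages = 500000;	/* example: ~2 GB worth of free 4 KiB pages */
	unsigned long old_buf = free_pages >> 1;	/* old: half of free pages */
	unsigned long new_buf = clamp_val(free_pages >> 2, 1UL,
					  (unsigned long)BUF_WRITE_PAGES);

	/* prints "old: 250000 pages, new: 32768 pages" - the cap applies */
	printf("old: %lu pages, new: %lu pages\n", old_buf, new_buf);
	return 0;
}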

Also, the maximum number of pages used for read buffering no longer
depends on the size of sector_t, which it inadvertently did before
(LZO_READ_PAGES was derived from MAP_PAGE_ENTRIES, which is itself a
function of sizeof(sector_t)).
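
Again for illustration only: a standalone sketch of that arithmetic,
assuming a 4 KiB PAGE_SIZE, showing how the old read-buffer cap varied
with sizeof(sector_t) while the new one is a fixed constant:

#include <stdio.h>

#define PAGE_SIZE	4096UL

/* old LZO_READ_PAGES: MAP_PAGE_ENTRIES * 8, with
 * MAP_PAGE_ENTRIES = PAGE_SIZE / sizeof(sector_t) - 1 */
static unsigned long old_cap(unsigned long sector_size)
{
	return (PAGE_SIZE / sector_size - 1) * 8;
}

int main(void)
{
	printf("sizeof(sector_t) == 8: old cap %lu pages\n", old_cap(8));	/* 4088 */
	printf("sizeof(sector_t) == 4: old cap %lu pages\n", old_cap(4));	/* 8184 */
	printf("new cap: %lu pages\n", 1UL << 12);	/* 4096, config-independent */
	return 0;
}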

In my testing, this had a negligible effect on performance, but it may
leave more pages available for other kernel needs during hibernation.

---------------------------------------
 kernel/power/swap.c |   15 +++++++++++----
 1 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 8742fd0..6212351 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -51,6 +51,9 @@
 
 #define MAP_PAGE_ENTRIES	(PAGE_SIZE / sizeof(sector_t) - 1)
 
+/* Maximum number of pages for write buffering. */
+#define BUF_WRITE_PAGES		(1 << 15)
+
 struct swap_map_page {
 	sector_t entries[MAP_PAGE_ENTRIES];
 	sector_t next_swap;
@@ -316,7 +319,9 @@ static int get_swap_writer(struct swap_map_handle *handle)
 		goto err_rel;
 	}
 	handle->k = 0;
-	handle->nr_free_pages = nr_free_pages() >> 1;
+	handle->nr_free_pages = nr_free_pages() >> 2;
+	handle->nr_free_pages = clamp_val(handle->nr_free_pages,
+	                                  1, BUF_WRITE_PAGES);
 	handle->written = 0;
 	handle->first_sector = handle->cur_swap;
 	return 0;
@@ -404,7 +409,7 @@ static int swap_writer_finish(struct swap_map_handle *handle,
 #define LZO_THREADS	3
 
 /* Maximum number of pages for read buffering. */
-#define LZO_READ_PAGES	(MAP_PAGE_ENTRIES * 8)
+#define LZO_READ_PAGES	(1 << 12)
 
 
 /**
@@ -618,7 +623,9 @@ static int save_image_lzo(struct swap_map_handle *handle,
 	 * Adjust number of free pages after all allocations have been done.
 	 * We don't want to run out of pages when writing.
 	 */
-	handle->nr_free_pages = nr_free_pages() >> 1;
+	handle->nr_free_pages = nr_free_pages() >> 2;
+	handle->nr_free_pages = clamp_val(handle->nr_free_pages,
+	                                  1, BUF_WRITE_PAGES);
 
 	/*
 	 * Start the CRC32 thread.
@@ -1130,7 +1137,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
 	/*
 	 * Adjust number of pages for read buffering, in case we are short.
 	 */
-	read_pages = (nr_free_pages() - snapshot_get_image_size()) >> 1;
+	read_pages = (nr_free_pages() - snapshot_get_image_size()) >> 2;
 	read_pages = clamp_val(read_pages, LZO_CMP_PAGES, LZO_READ_PAGES);
 
 	for (i = 0; i < read_pages; i++) {
---------------------------------------

-- 
Bojan

