Date:	Mon, 22 Feb 2016 22:50:54 -0500
From:	Rik van Riel <riel@...riel.com>
To:	linux-kernel@...r.kernel.org
Cc:	linux-mm@...ck.org, hannes@...xchg.org, akpm@...ux-foundation.org,
	mgorman@...e.de
Subject: [PATCH] mm,vmscan: compact memory from kswapd when lots of memory
 free already

If kswapd is woken up for a higher order allocation, for example
from alloc_skb, but the system already has lots of memory free,
kswapd_shrink_zone will rightfully decide kswapd should not free
any more memory.

However, at that point kswapd should proceed to compact memory, on
behalf of alloc_skb or others.

Currently kswapd will only compact memory if it first freed memory:
compact_pgdat() is called only when sc.nr_reclaimed exceeds nr_attempted,
and both counters stay at zero when reclaim is skipped. As a result,
kswapd never compacts memory when there is already lots of memory free.

On my home system, that led to kswapd occasionally using up to 5%
CPU time, with many wakeups from alloc_skb, and kswapd never
doing anything to relieve the situation that caused it to be woken
up.

Going ahead with compaction when kswapd did not attempt to reclaim
any memory, and as a consequence did not reclaim any memory, is the
right thing to do in this situation.
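
To illustrate the boundary case, here is a minimal userspace sketch
(not kernel code; the counter values are assumed for illustration)
showing how the old and new comparisons behave when kswapd neither
attempted nor reclaimed anything:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/*
	 * When the zones are already balanced, kswapd skips reclaim,
	 * so both counters remain zero.
	 */
	unsigned long nr_reclaimed = 0;
	unsigned long nr_attempted = 0;
	bool pgdat_needs_compaction = true;

	/* Old check: 0 > 0 is false, so compaction is skipped. */
	if (pgdat_needs_compaction && nr_reclaimed > nr_attempted)
		printf("old check: compact\n");
	else
		printf("old check: skip compaction\n");

	/* New check: 0 >= 0 is true, so compaction proceeds. */
	if (pgdat_needs_compaction && nr_reclaimed >= nr_attempted)
		printf("new check: compact\n");
	else
		printf("new check: skip compaction\n");

	return 0;
}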

Signed-off-by: Rik van Riel <riel@...hat.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 71b1c29948db..9566a04b9759 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3343,7 +3343,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
 		 * Compact if necessary and kswapd is reclaiming at least the
 		 * high watermark number of pages as requsted
 		 */
-		if (pgdat_needs_compaction && sc.nr_reclaimed > nr_attempted)
+		if (pgdat_needs_compaction && sc.nr_reclaimed >= nr_attempted)
 			compact_pgdat(pgdat, order);
 
 		/*
-- 
All rights reversed.
