Message-ID: <1417956021-27298-1-git-send-email-vdavydov@parallels.com>
Date:	Sun, 7 Dec 2014 15:40:21 +0300
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Johannes Weiner <hannes@...xchg.org>,
	Dave Chinner <david@...morbit.com>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: [PATCH -mm] mm: vmscan: shrink_zones: assure class zone is populated

Since commit 5df87d36a45e ("mm: vmscan: invoke slab shrinkers from
shrink_zone()"), slab shrinkers are invoked from shrink_zone(). Because
slab shrinkers have no notion of memory zones, we only call them after
scanning the highest zone suitable for allocation (the class zone).
However, class zone can be empty. E.g. if an x86_64 host has less than
4G of RAM, it will have only ZONE_DMA and ZONE_DMA32 populated while the
class zone for most allocations, ZONE_NORMAL, will be empty. As a
result, slab caches will not be scanned at all from the direct reclaim
path, which may result in premature OOM killer invocations.

Let's take the highest *populated* zone suitable for allocation for the
class zone to fix this issue.

Signed-off-by: Vladimir Davydov <vdavydov@...allels.com>
---
 mm/vmscan.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9130cf67bac1..5e8772b2b9ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2454,8 +2454,16 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 					requested_highidx, sc->nodemask) {
+		enum zone_type classzone_idx;
+
 		if (!populated_zone(zone))
 			continue;
+
+		classzone_idx = requested_highidx;
+		while (!populated_zone(zone->zone_pgdat->node_zones +
+							classzone_idx))
+			classzone_idx--;
+
 		/*
 		 * Take care memory controller reclaiming has small influence
 		 * to global LRU.
@@ -2503,7 +2511,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 			/* need some check for avoid more shrink_zone() */
 		}
 
-		if (shrink_zone(zone, sc, zone_idx(zone) == requested_highidx))
+		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
 
 		if (global_reclaim(sc) &&
-- 
1.7.10.4

