Date:	Wed, 20 Jul 2016 16:21:46 +0100
From:	Mel Gorman <mgorman@...hsingularity.net>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Johannes Weiner <hannes@...xchg.org>,
	Minchan Kim <minchan@...nel.org>,
	Michal Hocko <mhocko@...e.cz>,
	Vlastimil Babka <vbabka@...e.cz>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 0/5] Candidate fixes for premature OOM kills with node-lru v1

Both Joonsoo Kim and Minchan Kim have reported premature OOM kills on
a 32-bit platform. The common element is a zone-constrained high-order
allocation failing. Two factors appear to be at fault -- pgdat being
considered unreclaimable prematurely and insufficient rotation of the
active list.

Unfortunately, to date I have been unable to reproduce this with a variety
of stress workloads on a 2G 32-bit KVM instance. It's not clear why, as
the steps are similar to what was described. It means I've been unable to
determine whether this series addresses the problem. I'm hoping the
reporters can test and report back before these are merged to mmotm. What
I have checked is that a basic parallel dd workload completed successfully
on the same machine I used for the node-lru performance tests. I'll leave
the other tests running just in case anything interesting falls out.

The series is in three basic parts:

Patch 1 no longer accounts skipped pages as scanned. This avoids the pgdat
	being prematurely marked unreclaimable.
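
	As a rough illustration of the accounting change, here is a
	minimal, self-contained userspace model. It is not the actual
	mm/vmscan.c hunk; the page array, RECLAIM_IDX and all names below
	are invented for illustration. The point is only that pages
	skipped for being in an ineligible zone no longer contribute to
	the scan count:

#include <stdio.h>

#define RECLAIM_IDX 1	/* highest zone index eligible for this reclaim */

/* Toy model: each "page" is just the zone index it belongs to. */
static const int page_zone[] = { 0, 2, 1, 2, 0, 2, 1 };

int main(void)
{
	unsigned long nr_scanned = 0, nr_skipped = 0;
	size_t i;

	for (i = 0; i < sizeof(page_zone) / sizeof(page_zone[0]); i++) {
		if (page_zone[i] > RECLAIM_IDX) {
			/* Skipped pages no longer count as scanned, so
			 * walking lots of ineligible highmem pages cannot
			 * by itself mark the pgdat unreclaimable. */
			nr_skipped++;
			continue;
		}
		nr_scanned++;
	}

	printf("scanned=%lu skipped=%lu\n", nr_scanned, nr_skipped);
	return 0;
}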

Patches 2-4 add per-zone stats back in. The actual stats patch is different
	to Minchan's as the original patch did not account for the
	unevictable LRU, which would corrupt counters. The latter two
	patches remove approximations based on pgdat statistics. It's
	effectively a revert of "mm, vmstat: remove zone and node double
	accounting by approximating retries", but different LRU stats are
	used. This is better than a full revert or a reworking of the
	series as it preserves the history of why the zone stats are
	necessary.

	If this works out, we may have to leave the double accounting in
	place for now until an alternative cheap solution presents itself.
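
	As a sketch of why the unevictable LRU has to be accounted as
	well, consider this toy userspace model. None of this is kernel
	code; the counters and helper stand in for the per-zone vmstat
	items and their names are invented. If the unevictable list were
	excluded from the accounting, a page moving off the inactive list
	would decrement one counter with no matching increment anywhere,
	and the per-zone totals would drift:

#include <stdio.h>

enum lru_list { LRU_INACTIVE, LRU_ACTIVE, LRU_UNEVICTABLE, NR_LRU };

#define MAX_NR_ZONES 3

/* Toy per-zone LRU counters, standing in for the per-zone vmstat items. */
static long zone_lru[MAX_NR_ZONES][NR_LRU];

static void lru_stat_add(int zone, enum lru_list lru, long delta)
{
	/* Every LRU, including LRU_UNEVICTABLE, is accounted per zone
	 * so that moves between lists always balance out. */
	zone_lru[zone][lru] += delta;
}

int main(void)
{
	/* A page on zone 1's inactive list is found to be unevictable. */
	lru_stat_add(1, LRU_INACTIVE, 1);	/* page added */
	lru_stat_add(1, LRU_INACTIVE, -1);	/* moved off inactive... */
	lru_stat_add(1, LRU_UNEVICTABLE, 1);	/* ...onto unevictable */

	printf("zone1: inactive=%ld unevictable=%ld\n",
	       zone_lru[1][LRU_INACTIVE], zone_lru[1][LRU_UNEVICTABLE]);
	return 0;
}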

Patch 5 rotates inactive/active lists for lowmem allocations. This is also
	quite different to Minchan's patch as the original patch did not
	account for memcg and would rotate if *any* eligible zone needed
	rotation, which could rotate excessively. The new patch considers
	the ratio across all eligible zones, which is more in line with
	node-lru in general.
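
	A toy illustration of the aggregate check follows. This is a
	self-contained userspace model loosely based on the inactive/active
	balance check; the numbers, the reclaim_idx parameter and the
	helper are all invented, and the real check also has to consider
	memcg. The idea is that the decision is made on the summed LRU
	sizes of all eligible zones rather than on any single zone:

#include <stdio.h>
#include <stdbool.h>

#define MAX_NR_ZONES 3

/* Toy per-zone inactive/active counts for one node. */
static const unsigned long inactive[MAX_NR_ZONES] = { 100, 50, 4000 };
static const unsigned long active[MAX_NR_ZONES]   = { 300, 60, 1000 };

/* Sum the LRU sizes over all zones eligible for this allocation and
 * compare the aggregate ratio, instead of rotating whenever any single
 * eligible zone looks imbalanced. */
static bool inactive_is_low(int reclaim_idx)
{
	unsigned long inact = 0, act = 0;
	int z;

	for (z = 0; z <= reclaim_idx; z++) {
		inact += inactive[z];
		act += active[z];
	}
	return inact < act;
}

int main(void)
{
	/* A lowmem request (reclaim_idx == 1): only zones 0-1 count,
	 * their aggregate inactive list is small, so rotate. */
	printf("rotate for lowmem: %s\n",
	       inactive_is_low(1) ? "yes" : "no");
	/* A normal request sees all zones; zone 2's large inactive
	 * list means no rotation is needed. */
	printf("rotate for all zones: %s\n",
	       inactive_is_low(2) ? "yes" : "no");
	return 0;
}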

 include/linux/mm_inline.h | 19 ++-------------
 include/linux/mmzone.h    |  7 ++++++
 include/linux/swap.h      |  1 +
 mm/compaction.c           | 20 +---------------
 mm/migrate.c              |  2 ++
 mm/page-writeback.c       | 17 +++++++-------
 mm/page_alloc.c           | 59 ++++++++++++++++------------------------------
 mm/vmscan.c               | 60 ++++++++++++++++++++++++++++++++++++++++++-----
 mm/vmstat.c               |  6 +++++
 9 files changed, 102 insertions(+), 89 deletions(-)

-- 
2.6.4
