Message-ID: <20151016155716.GF19597@dhcp22.suse.cz>
Date:	Fri, 16 Oct 2015 17:57:17 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
	David Rientjes <rientjes@...gle.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Kyle Walker <kwalker@...hat.com>,
	Christoph Lameter <cl@...ux.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Vladimir Davydov <vdavydov@...allels.com>,
	linux-mm <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Stanislav Kozina <skozina@...hat.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>
Subject: Re: Silent hang up caused by pages being not scanned?

On Thu 15-10-15 15:14:09, Michal Hocko wrote:
> On Tue 13-10-15 09:37:06, Linus Torvalds wrote:
[...]
> > Now, I realize the above suggestions are big changes, and they'll
> > likely break things and we'll still need to tweak things, but dammit,
> > wouldn't that be better than just randomly tweaking the insane
> > zone_reclaimable logic?
> 
> Yes zone_reclaimable is subtle and imho it is used even at the
> wrong level. We should decide whether we are really OOM at
> __alloc_pages_slowpath. We definitely need a big picture logic to tell
> us when it makes sense to drop the ball and trigger OOM killer or fail
> the allocation request.
> 
> E.g. free + reclaimable + writeback < min_wmark on all usable zones for
> more than X rounds of direct reclaim without any progress is
> a sufficient signal to go OOM. Costly/noretry allocations can fail earlier
> of course. This is obviously a half-baked idea which needs much more
> consideration; all I am trying to say is that we need a high-level
> metric to tell the OOM condition.

OK, so here is what I am playing with currently. It is not complete
yet. Anyway, I have tested it with 2 scenarios on a swapless system with
2G of RAM; both start by running the following writer script:

$ cat writer.sh
#!/bin/sh
size=$((1<<30))
block=$((4<<10))

# background writer: keep generating dirty and writeback pages
writer()
{
	(
	while true
	do
		dd if=/dev/zero of=/mnt/data/file.$1 bs=$block count=$(($size/$block))
		rm /mnt/data/file.$1
		sync
	done
	) &
}

writer 1
writer 2

sleep 10s # allow enough dirty pages to accumulate

1) massive OOM
Start 100 memeaters, 80M each, running in parallel (anon private
MAP_POPULATE mappings). This triggers many OOM killer invocations and
the overall count is what I was interested in. The test is considered
finished when we reach a steady state - writers can make progress and
there is no more OOM killing for some time.

$ grep "invoked oom-killer" base-run-oom.log | wc -l
78
$ grep "invoked oom-killer" test-run-oom.log | wc -l
63

So it looks like we have triggered fewer OOM kills with the patch
applied. I haven't checked those too closely but it seems like at least
two instances might not have triggered with the current implementation
because the DMA32 zone is considered reclaimable. But this check is
inherently racy so we cannot be sure.
$ grep "DMA32.*all_unreclaimable? no" test2-run-oom.log | wc -l
2
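
For completeness, the memeater is just a trivial helper which creates an
anon private MAP_POPULATE mapping of the requested size and then sits on
it. A simplified sketch (not necessarily the exact memcg_test/tools/mem_eater
used below) would look like:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	unsigned long size;
	char *end;
	void *map;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <size>[K|M|G]\n", argv[0]);
		return 1;
	}

	size = strtoul(argv[1], &end, 10);
	switch (*end) {
	case 'G': case 'g': size <<= 10;	/* fall through */
	case 'M': case 'm': size <<= 10;	/* fall through */
	case 'K': case 'k': size <<= 10;
	}

	/* populate the whole mapping up front so the memory is really used */
	map = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pause();	/* keep the memory pinned until we get killed */
	return 0;
}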

2) almost OOM situation
Invoke 10 memeaters in parallel and try to fill up all the memory
without triggering the OOM killer. This is quite hard and it required a
lot of tuning. I've ended up with:
#!/bin/sh
pkill mem_eater
sync
echo 3 > /proc/sys/vm/drop_caches
sync
# give each eater ~1/10 of the currently free memory minus a 16M margin
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(16*1024)}' /proc/meminfo)
sh writer.sh &
sleep 10s
for i in $(seq 10)
do
        memcg_test/tools/mem_eater $size &
done

wait

and this one doesn't hit the OOM killer with the original
implementation, whereas it does hit it with the patch applied:
[   32.727001] DMA32 free:5428kB min:5532kB low:6912kB high:8296kB active_anon:1802520kB inactive_anon:204kB active_file:6692kB
 inactive_file:137184kB unevictable:0kB isolated(anon):136kB isolated(file):32kB present:2080640kB managed:1997880kB mlocked:0kB
 dirty:0kB writeback:137168kB mapped:6408kB shmem:204kB slab_reclaimable:20472kB slab_unreclaimable:13276kB kernel_stack:1456kB
 pagetables:4756kB unstable:0kB bounce:0kB free_pcp:120kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:948764
 all_unreclaimable? yes

There is a lot of memory under writeback but all_unreclaimable is yes,
so who knows, maybe it is just a coincidence that we haven't triggered
the OOM killer in the original kernel.

Anyway, the two implementations will be hard to compare because the
workloads are very different, but I think something like the patch
below should be more readable and deterministic than what we have right
now. It will need some more tuning for sure and I will be playing with
it some more. I would just like to hear opinions on whether this
approach makes sense. If yes, I will post it separately in a new thread
for a wider discussion; this email thread seems to be full of detours
already.
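
To illustrate the backoff used by the patch below, here is a
stand-alone toy program (not kernel code, the numbers are made up)
which shows how the retry target shrinks with every reclaim round that
makes no progress, until only the free pages are left and the watermark
check fails:

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
	/* made-up example: ~625M of reclaimable+writeback pages, ~5M free */
	long reclaimable = 150000, writeback = 10000, free = 1300;
	int stall_backoff;

	for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
		long target = reclaimable + writeback;

		/* same linear backoff as in the patch, clamped for this toy example */
		target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
		if (target < 0)
			target = 0;
		target += free;

		printf("round %2d: retry target %ld pages\n", stall_backoff, target);
	}
	return 0;
}
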
---
From e8620185cc1139cd47cee64a7e6b96e9a7c92d25 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@...e.com>
Date: Fri, 16 Oct 2015 14:34:50 +0200
Subject: [PATCH] mm, oom: refactor oom detection

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, in order to
prevent a premature OOM killer invocation - the LRU might be full of
dirty or writeback pages which direct reclaim cannot clean up.
zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and
it might lead to a livelock when a single freed page keeps the
allocator looping while the current task is not able to allocate that
single page. The OOM killer would be more appropriate than looping
without any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is at too low a level for a high-level decision such
as OOM, which is a per-zonelist property. It is __alloc_pages_slowpath
which knows how many attempts have been made and how much progress there
has been so far, therefore it is a more appropriate place to implement
this logic.

The new heuristic tries to be more deterministic and easier to
follow. Retrying makes sense only if the currently reclaimable memory
(pages on reclaimable LRUs + writeback pages) plus free pages would allow
the current allocation request to succeed (as per __zone_watermark_ok)
in at least one zone.
This alone wouldn't be sufficient because the writeback might get stuck
and reclaimable pages might be pinned for a really long time or even
depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should eventually
converge to only NR_FREE_PAGES, fail the watermark check and proceed
to OOM. The nice aspect of this approach is that it works independently
of the allocation order. The backoff is simple and linear, cutting 1/16
of the reclaimable pages for each round without any progress. We are
optimistic and reset the counter after successful reclaim rounds.

TODO: what are we going to do with __GFP_REPEAT? We should try harder
but how much harder? Do we even need it? Opportunistic high order
allocations can use __GFP_NORETRY...

Signed-off-by: Michal Hocko <mhocko@...e.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 66 +++++++++++++++++++++++++++++++++++++++++++++-------
 mm/vmscan.c          | 10 +-------
 3 files changed, 59 insertions(+), 18 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c7c4b418498..8298e1dc20f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c73913648357..ae927c762917 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps applied to the potentially reclaimable pages if direct
+ * reclaim cannot make any progress. Each step reduces the reclaim target by
+ * 1/MAX_STALL_BACKOFF of the reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2979,11 +2986,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3135,23 +3144,62 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
-	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs + those
+	 * currently on writeback as usable but make sure we converge to
+	 * OOM if we cannot make any progress after multiple consecutive
+	 * attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long writeback = zone_page_state(zone, NR_WRITEBACK);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable + writeback;
+		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 * We might over-account here because some pages under writeback
+		 * might be on the LRU as well but that shouldn't confuse us too
+		 * much.
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
+	/* TODO what about GFP_REPEAT */
+
 	/* Reclaim has failed us, start killing things */
 	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
 	if (page)
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c88d74ad9304..bc14217acd47 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -193,7 +193,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2639,10 +2639,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.1

-- 
Michal Hocko
SUSE Labs
