Message-Id: <1492683865-27549-1-git-send-email-rabin.vincent@axis.com>
Date: Thu, 20 Apr 2017 12:24:25 +0200
From: Rabin Vincent <rabin.vincent@...s.com>
To: akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Rabin Vincent <rabinv@...s.com>,
Ming Ling <ming.ling@...eadtrum.com>,
Michal Hocko <mhocko@...e.com>,
Minchan Kim <minchan@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>, stable@...r.kernel.org
Subject: [PATCH] mm: prevent NR_ISOLATED_* stats from going negative
From: Rabin Vincent <rabinv@...s.com>

Commit 6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn
based migration") moved the dec_node_page_state() call (along with the
page_is_file_cache() call) to after putback_lru_page().  But
putback_lru_page() drops our reference to the page, after which the
page can be freed and reused, so page_is_file_cache() can return a
different value than it did at isolation time.  page_is_file_cache()
should therefore be called before putback_lru_page(), as it was before
that patch, to prevent the NR_ISOLATED_* stats from going negative.
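
To illustrate the ordering constraint, here is a simplified sketch (not
the exact kernel code; the comments are annotations added for this
posting):

	/* We still hold the reference taken when the page was
	 * isolated, so its anon/file type is stable and we decrement
	 * the same NR_ISOLATED_* counter that was incremented at
	 * isolation time. */
	dec_node_page_state(page, NR_ISOLATED_ANON +
			page_is_file_cache(page));

	/* putback_lru_page() puts the page back on the LRU and drops
	 * our reference.  The page can then be freed and reallocated,
	 * after which page_is_file_cache() may report the other type
	 * and the wrong counter would be decremented. */
	putback_lru_page(page);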
Without this fix, non-CONFIG_SMP kernels end up hanging in the
while (too_many_isolated()) { congestion_wait(); } loop in
shrink_inactive_list() due to the negative stats.
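
For reference, the loop in question, condensed from mm/vmscan.c of this
era (the kswapd and sane_reclaim() early returns and the anon branch
are omitted here):

	/* In shrink_inactive_list(): */
	while (unlikely(too_many_isolated(pgdat, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);
		...
	}

	static int too_many_isolated(struct pglist_data *pgdat, int file,
			struct scan_control *sc)
	{
		unsigned long inactive, isolated;
		...
		inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
		isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
		...
		return isolated > inactive;
	}

Once NR_ISOLATED_FILE has underflowed, node_page_state() returns a huge
unsigned value, "isolated > inactive" is always true, and reclaim spins
forever.
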
Mem-Info:
active_anon:32567 inactive_anon:121 isolated_anon:1
active_file:6066 inactive_file:6639 isolated_file:4294967295
^^^^^^^^^^
unevictable:0 dirty:115 writeback:0 unstable:0
slab_reclaimable:2086 slab_unreclaimable:3167
mapped:3398 shmem:18366 pagetables:1145 bounce:0
free:1798 free_pcp:13 free_cma:0
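
(4294967295 = 2^32 - 1, i.e. -1 printed as an unsigned 32-bit value:
NR_ISOLATED_FILE has been decremented below zero on this evidently
32-bit build.)
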
Fixes: 6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn based migration")
Cc: Ming Ling <ming.ling@...eadtrum.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: <stable@...r.kernel.org>
Signed-off-by: Rabin Vincent <rabinv@...s.com>
---
mm/migrate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index ed97c2c..738f1d5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -184,9 +184,9 @@ void putback_movable_pages(struct list_head *l)
 			unlock_page(page);
 			put_page(page);
 		} else {
-			putback_lru_page(page);
 			dec_node_page_state(page, NR_ISOLATED_ANON +
 					page_is_file_cache(page));
+			putback_lru_page(page);
 		}
 	}
 }
--
2.7.0