Message-ID: <alpine.LSU.2.11.1609232024340.2495@eggly.anvils>
Date: Fri, 23 Sep 2016 20:27:04 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH 3/3] mm: delete unnecessary and unsafe init_tlb_ubc()

init_tlb_ubc() looked unnecessary to me: tlb_ubc is statically initialized
with zeroes in the init_task, and copied from parent to child while it is
quiescent in arch_dup_task_struct(); so I went to delete it.
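
(For reference, the state in question is task_struct's tlb_ubc; a rough
sketch of its definition around 4.8, paraphrased from include/linux/sched.h
rather than quoted verbatim:

	#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	struct tlbflush_unmap_batch {
		/* CPUs that may still hold a TLB entry for the unmapped pages */
		struct cpumask cpumask;
		/* true if any bit in cpumask is set */
		bool flush_required;
		/* true if any of the unmapped ptes was dirty */
		bool writable;
	};
	#endif

All-zeroes here means an empty cpumask and flush_required false, so there
is nothing left for init_tlb_ubc() to do.)
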
But I inserted temporary debug WARN_ONs in place of init_tlb_ubc() to check
that it was always empty at that point, and found them firing: because
memcg reclaim can recurse into global reclaim (when allocating biosets
for swapout in my case), and arrive back at the init_tlb_ubc() in
shrink_node_memcg().
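
(The temporary check described above would look roughly like this; it is a
reconstruction of the idea, not the exact debug hunk, dropped in where the
init_tlb_ubc() call used to be:

	#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	/* temporary debug check standing in for init_tlb_ubc() */
	WARN_ON(current->tlb_ubc.flush_required);
	#endif

It fires whenever an outer reclaim has already queued a deferred flush by
the time a nested reclaim reaches this point.)
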
Resetting tlb_ubc.flush_required at that point is wrong: if the upper
level needs a deferred TLB flush, but the lower level turns out not to,
we miss a TLB flush. But fortunately, that's the only part of the
protocol that does not nest: with the initialization removed, cpumask
collects bits from upper and lower levels, and flushes TLB when needed.
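
(To make the failure mode concrete, here is a minimal user-space model of
the nesting problem; set_pending() and deferred_flush() merely stand in for
set_tlb_ubc_flush_pending() and try_to_unmap_flush(), and the explicit
reset models the deleted init_tlb_ubc():

	#include <stdbool.h>
	#include <stdio.h>

	static bool flush_required;	/* models current->tlb_ubc.flush_required */

	static void set_pending(void)	/* stand-in for set_tlb_ubc_flush_pending() */
	{
		flush_required = true;
	}

	static void deferred_flush(void)	/* stand-in for try_to_unmap_flush() */
	{
		printf(flush_required ? "TLB flushed\n" : "TLB flush missed!\n");
		flush_required = false;
	}

	int main(void)
	{
		set_pending();		/* outer (memcg) reclaim unmaps pages for swap */
		flush_required = false;	/* nested global reclaim runs init_tlb_ubc() */
		deferred_flush();	/* prints "TLB flush missed!": outer flush lost */
		return 0;
	}

With the initialization gone, the outer level's flush_required survives any
nested reclaim, and since the cpumask only ever accumulates bits, the
eventual flush covers both levels.)
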
Fixes: 72b252aed506 ("mm: send one IPI per CPU to TLB flush all entries after unmapping pages")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
Cc: stable@...r.kernel.org # 4.3+
---
mm/vmscan.c | 19 -------------------
1 file changed, 19 deletions(-)
--- 4.8-rc7/mm/vmscan.c 2016-09-05 16:42:52.496692429 -0700
+++ linux/mm/vmscan.c 2016-09-22 09:32:37.900894833 -0700
@@ -2303,23 +2303,6 @@ out:
 	}
 }
 
-#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
-static void init_tlb_ubc(void)
-{
-	/*
-	 * This deliberately does not clear the cpumask as it's expensive
-	 * and unnecessary. If there happens to be data in there then the
-	 * first SWAP_CLUSTER_MAX pages will send an unnecessary IPI and
-	 * then will be cleared.
-	 */
-	current->tlb_ubc.flush_required = false;
-}
-#else
-static inline void init_tlb_ubc(void)
-{
-}
-#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
-
 /*
  * This is a basic per-node page freer. Used by both kswapd and direct reclaim.
  */
@@ -2355,8 +2338,6 @@ static void shrink_node_memcg(struct pgl
 	scan_adjusted = (global_reclaim(sc) && !current_is_kswapd() &&
 			 sc->priority == DEF_PRIORITY);
 
-	init_tlb_ubc();
-
 	blk_start_plug(&plug);
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 	       nr[LRU_INACTIVE_FILE]) {