Message-Id: <20190222174337.26390-5-aryabinin@virtuozzo.com>
Date: Fri, 22 Feb 2019 20:43:37 +0300
From: Andrey Ryabinin <aryabinin@...tuozzo.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Rik van Riel <riel@...riel.com>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 5/5] mm/vmscan: don't forcibly shrink the active anon lru list

shrink_node_memcg() always forcibly shrinks the active anon list.
This doesn't seem like correct behavior: if the system/memcg has no
swap, it is pointless to rebalance the anon lru lists.
And in case we did scan the active anon list above, it's unclear why
we would need this additional forced scan. If there are cases where we
want a more aggressive scan of the anon lru, we should just change the
scan target in get_scan_count() (and better explain such cases in the
comments).
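
For context, get_scan_count() already avoids scanning anon pages at
all when swap is unavailable. A rough sketch of the relevant check as
it looks around this kernel version (exact lines may differ):

	/* If we have no swap space, do not bother scanning anon pages. */
	if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
		scan_balance = SCAN_FILE;
		goto out;
	}

With that, nr[LRU_ACTIVE_ANON] stays zero in the swapless case, so the
forced rebalance removed by this patch was the only place in
shrink_node_memcg() that still touched the anon lru.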
Remove this forced shrink and let get_scan_count() decide how much of
the active anon list we want to shrink.
Signed-off-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Rik van Riel <riel@...riel.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>
---
mm/vmscan.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07f74e9507b6..efd10d6b9510 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2563,8 +2563,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 					 sc->priority == DEF_PRIORITY);
 
 	blk_start_plug(&plug);
-	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
-					nr[LRU_INACTIVE_FILE]) {
+	while (nr[LRU_ACTIVE_ANON] || nr[LRU_INACTIVE_ANON] ||
+	       nr[LRU_ACTIVE_FILE] || nr[LRU_INACTIVE_FILE]) {
 		unsigned long nr_anon, nr_file, percentage;
 		unsigned long nr_scanned;
 
@@ -2636,14 +2636,6 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	}
 	blk_finish_plug(&plug);
 	sc->nr_reclaimed += nr_reclaimed;
-
-	/*
-	 * Even if we did not try to evict anon pages at all, we want to
-	 * rebalance the anon lru active/inactive ratio.
-	 */
-	if (inactive_list_is_low(lruvec, false, memcg, sc, true))
-		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
-				   sc, LRU_ACTIVE_ANON);
 }
 
 /* Use reclaim/compaction for costly allocs or under memory pressure */
--
2.19.2