Message-Id: <1569974210-55366-1-git-send-email-yang.shi@linux.alibaba.com>
Date: Wed, 2 Oct 2019 07:56:50 +0800
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: kirill.shutemov@...ux.intel.com, ktkhai@...tuozzo.com,
hannes@...xchg.org, mhocko@...e.com, hughd@...gle.com,
shakeelb@...gle.com, rientjes@...gle.com, akpm@...ux-foundation.org
Cc: yang.shi@...ux.alibaba.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm thp: shrink deferred split THPs harder
Deferred split THPs may accumulate with some workloads; they get shrunk
when memory pressure is hit. Currently DEFAULT_SEEKS determines how many
objects get scanned and then split if possible. However, unlike other
system cache objects (e.g. the inode cache, which incurs extra I/O when
over-reclaimed), the unmapped pages will not be accessed anymore, so we
can shrink them more aggressively.

We could shrink THPs pro-actively even when memory pressure is not hit;
however, IMHO waiting for memory pressure is still a good compromise and
trade-off. And we do have a simpler way to shrink these objects harder
before we have to take other means to drain them pro-actively.

Change shrinker->seeks to 0 to shrink deferred split THPs harder.
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Kirill Tkhai <ktkhai@...tuozzo.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Shakeel Butt <shakeelb@...gle.com>
Cc: David Rientjes <rientjes@...gle.com>
Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3b78910..1d6b1f1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2955,7 +2955,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
static struct shrinker deferred_split_shrinker = {
.count_objects = deferred_split_count,
.scan_objects = deferred_split_scan,
- .seeks = DEFAULT_SEEKS,
+ .seeks = 0,
.flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
SHRINKER_NONSLAB,
};
--
1.8.3.1