Message-Id: <20180119132544.19569-2-aryabinin@virtuozzo.com>
Date: Fri, 19 Jan 2018 16:25:44 +0300
From: Andrey Ryabinin <aryabinin@...tuozzo.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Andrey Ryabinin <aryabinin@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>
Subject: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in mem_cgroup_resize_limit()

Currently mem_cgroup_resize_limit() retries to set the limit after
reclaiming only 32 pages at a time. It makes more sense to reclaim the
needed amount of pages right away.

This works noticeably faster, especially when 'usage - limit' is big.
E.g. bringing the limit down from 4G to 50M:

Before:
 # perf stat echo 50M > memory.limit_in_bytes

 Performance counter stats for 'echo 50M':

        386.582382      task-clock (msec)         #    0.835 CPUs utilized
             2,502      context-switches          #    0.006 M/sec

       0.463244382 seconds time elapsed

After:
 # perf stat echo 50M > memory.limit_in_bytes

 Performance counter stats for 'echo 50M':

        169.403906      task-clock (msec)         #    0.849 CPUs utilized
                14      context-switches          #    0.083 K/sec

       0.199536900 seconds time elapsed

Signed-off-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Shakeel Butt <shakeelb@...gle.com>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Vladimir Davydov <vdavydov.dev@...il.com>
---
mm/memcontrol.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
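
As background for the numbers above, here is a minimal userspace model of
the retry loop (illustration only, not kernel code: the "reclaim" step here
simply subtracts pages, whereas the real try_to_free_mem_cgroup_pages() may
free fewer than requested and the loop re-reads the page counter each pass):

/* Standalone model of the reclaim-batch change; build with any C compiler. */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* fixed batch used before this patch */

/* Shrink "usage" down to "limit", "batch" pages per reclaim call;
 * return how many reclaim calls it took. */
static unsigned long shrink(unsigned long usage, unsigned long limit,
			    unsigned long batch)
{
	unsigned long calls = 0;

	while (usage > limit) {
		unsigned long excess = usage - limit;

		usage -= (batch < excess) ? batch : excess;
		calls++;
	}
	return calls;
}

int main(void)
{
	/* 4G -> 50M with 4K pages: ~1048576 pages down to 12800 pages */
	unsigned long usage = 1048576, limit = 12800;

	printf("old: %lu reclaim calls\n",
	       shrink(usage, limit, SWAP_CLUSTER_MAX));
	printf("new: %lu reclaim call(s)\n",
	       shrink(usage, limit, usage - limit));
	return 0;
}

For this workload the old fixed 32-page batch takes tens of thousands of
reclaim calls, while passing the whole excess takes one. The
max_t(long, 1, ...) in the patch keeps the request at least one page in
case usage has already fallen to or below the new limit.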
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9d987f3e79dc..09bac2df2f12 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2448,6 +2448,7 @@ static DEFINE_MUTEX(memcg_limit_mutex);
 static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 				   unsigned long limit, bool memsw)
 {
+	unsigned long nr_pages;
 	bool enlarge = false;
 	int ret;
 	bool limits_invariant;
@@ -2479,8 +2480,9 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 		if (!ret)
 			break;
 
-		if (!try_to_free_mem_cgroup_pages(memcg, 1,
-					GFP_KERNEL, !memsw)) {
+		nr_pages = max_t(long, 1, page_counter_read(counter) - limit);
+		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
+						  GFP_KERNEL, !memsw)) {
 			ret = -EBUSY;
 			break;
 		}
--
2.13.6