Message-ID: <790660d8826b95b3b6bd5d7ee0c7510dc70b1a58.1386571280.git.vdavydov@parallels.com>
Date: Mon, 9 Dec 2013 12:05:57 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: <dchinner@...hat.com>, <hannes@...xchg.org>, <mhocko@...e.cz>,
<akpm@...ux-foundation.org>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<cgroups@...r.kernel.org>, <devel@...nvz.org>,
<glommer@...nvz.org>, <glommer@...il.com>,
<vdavydov@...allels.com>, Balbir Singh <bsingharora@...il.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: [PATCH v13 16/16] memcg: flush memcg items upon memcg destruction
From: Glauber Costa <glommer@...nvz.org>
When a memcg is destroyed, it is not released immediately; it lingers
until all objects charged to it are gone. This means that if a memcg is
restarted with the very same workload - a very common case - the objects
already cached won't be billed to the new memcg. This is mostly
undesirable, since a container can exploit this by restarting itself
every time it reaches its limit, and then coming up again with a fresh
new limit.
Since we now have targeted reclaim, I maintain that a memcg that is
destroyed should be flushed away. This makes perfect sense if we assume
that a memcg that goes away most likely indicates an isolated workload
that has terminated.
Signed-off-by: Glauber Costa <glommer@...nvz.org>
Signed-off-by: Vladimir Davydov <vdavydov@...allels.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.cz>
Cc: Balbir Singh <bsingharora@...il.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
mm/memcontrol.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 182199f..65ef284 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6171,12 +6171,40 @@ static void memcg_destroy_kmem(struct mem_cgroup *memcg)
 	memcg_destroy_all_lrus(memcg);
 }
 
+static void memcg_drop_slab(struct mem_cgroup *memcg)
+{
+	struct shrink_control shrink = {
+		.gfp_mask = GFP_KERNEL,
+		.target_mem_cgroup = memcg,
+	};
+	unsigned long nr_objects;
+
+	nodes_setall(shrink.nodes_to_scan);
+	do {
+		nr_objects = shrink_slab(&shrink, 1000, 1000);
+	} while (nr_objects > 0);
+}
+
 static void kmem_cgroup_css_offline(struct mem_cgroup *memcg)
 {
 	if (!memcg_kmem_is_active(memcg))
 		return;
 
 	/*
+	 * When a memcg is destroyed, it won't be immediately released until
+	 * all objects are gone. This means that if a memcg is restarted with
+	 * the very same workload - a very common case - the objects already
+	 * cached won't be billed to the new memcg. This is mostly undesirable,
+	 * since a container can exploit this by restarting itself every time
+	 * it reaches its limit, and then coming up again with a fresh limit.
+	 *
+	 * Therefore a memcg that is destroyed should be flushed away. It
+	 * makes perfect sense if we assume that a memcg that goes away
+	 * indicates an isolated workload that is terminated.
+	 */
+	memcg_drop_slab(memcg);
+
+	/*
 	 * kmem charges can outlive the cgroup. In the case of slab
 	 * pages, for instance, a page contain objects from various
 	 * processes. As we prevent from taking a reference for every
--
1.7.10.4