Message-ID: <1403612916-26655-1-git-send-email-vdavydov@parallels.com>
Date: Tue, 24 Jun 2014 16:28:36 +0400
From: Vladimir Davydov <vdavydov@parallels.com>
To: <akpm@...ux-foundation.org>
CC: <iamjoonsoo.kim@....com>, <cl@...ux.com>, <rientjes@...gle.com>,
<penberg@...nel.org>, <hannes@...xchg.org>, <mhocko@...e.cz>,
<linux-kernel@vger.kernel.org>, <linux-mm@...ck.org>
Subject: [PATCH -mm] slab: set free_limit for dead caches to 0

We must not keep empty slabs on dead memcg caches, because otherwise
they will never be destroyed.

Currently, we check if the cache is dead in free_block and, if so, drop
empty slabs irrespective of the node's free_limit. Since this check is
made on every object free, it can be pretty expensive. Let's avoid the
extra check by zeroing the nodes' free_limit for dead caches on
kmem_cache_shrink: with a zero limit, the existing free_limit test in
free_block will drop every empty slab anyway, so no additional overhead
is added to the free hot path.
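
For illustration, here is a simplified sketch of the empty-slab path in
free_block() after this change (surrounding loop and comments trimmed):

	/* n->list_lock is held by the caller */
	if (page->active == 0) {
		/*
		 * For a dead cache, __kmem_cache_shrink has zeroed
		 * n->free_limit, so this test always succeeds and the
		 * empty slab is destroyed - no extra per-free
		 * memcg_cache_dead() check is needed.
		 */
		if (n->free_objects > n->free_limit) {
			n->free_objects -= cachep->num;
			slab_destroy(cachep, page);
		} else {
			list_add(&page->lru, &n->slabs_free);
		}
	}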

Note that ->free_limit can be updated on cpu/memory hotplug, so we must
handle it properly there as well.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
mm/slab.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index b35bf2120b96..6009e44a4d1d 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1155,11 +1155,13 @@ static int init_cache_node_node(int node)
 			cachep->node[node] = n;
 		}
 
-		spin_lock_irq(&n->list_lock);
-		n->free_limit =
-			(1 + nr_cpus_node(node)) *
-			cachep->batchcount + cachep->num;
-		spin_unlock_irq(&n->list_lock);
+		if (!memcg_cache_dead(cachep)) {
+			spin_lock_irq(&n->list_lock);
+			n->free_limit =
+				(1 + nr_cpus_node(node)) *
+				cachep->batchcount + cachep->num;
+			spin_unlock_irq(&n->list_lock);
+		}
 	}
 	return 0;
 }
@@ -1193,7 +1195,8 @@ static void cpuup_canceled(long cpu)
 		spin_lock_irq(&n->list_lock);
 
 		/* Free limit for this kmem_cache_node */
-		n->free_limit -= cachep->batchcount;
+		if (!memcg_cache_dead(cachep))
+			n->free_limit -= cachep->batchcount;
 
 		if (nc)
 			free_block(cachep, nc->entry, nc->avail, node);
@@ -2544,6 +2547,12 @@ int __kmem_cache_shrink(struct kmem_cache *cachep)
 
 	check_irq_on();
 	for_each_kmem_cache_node(cachep, node, n) {
+		if (memcg_cache_dead(cachep)) {
+			spin_lock_irq(&n->list_lock);
+			n->free_limit = 0;
+			spin_unlock_irq(&n->list_lock);
+		}
+
 		drain_freelist(cachep, n, slabs_tofree(cachep, n));
 
 		ret += !list_empty(&n->slabs_full) ||
@@ -3426,8 +3435,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			if (n->free_objects > n->free_limit ||
-			    memcg_cache_dead(cachep)) {
+			if (n->free_objects > n->free_limit) {
 				n->free_objects -= cachep->num;
 				/* No need to drop any previously held
 				 * lock here, even if we have a off-slab slab
--
1.7.10.4
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/