Message-ID: <20140624075011.GD4836@js1304-P5Q-DELUXE>
Date: Tue, 24 Jun 2014 16:50:11 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Vladimir Davydov <vdavydov@...allels.com>
Cc: akpm@...ux-foundation.org, cl@...ux.com, rientjes@...gle.com,
penberg@...nel.org, hannes@...xchg.org, mhocko@...e.cz,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH -mm v3 7/8] slub: make dead memcg caches discard free
slabs immediately
On Fri, Jun 13, 2014 at 12:38:21AM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches,
> otherwise they will be hanging around forever.
>
> This patch makes SLUB discard dead memcg caches' slabs as soon as they
> become empty. To achieve that, it disables per cpu partial lists for
> dead caches (see put_cpu_partial) and forbids keeping empty slabs on per
> node partial lists by setting cache's min_partial to 0 on
> kmem_cache_shrink, which is always called on memcg offline (see
> memcg_unregister_all_caches).
>
> Signed-off-by: Vladimir Davydov <vdavydov@...allels.com>
> Thanks-to: Joonsoo Kim <iamjoonsoo.kim@....com>
> ---
> mm/slub.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 52565a9426ef..0d2d1978e62c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2064,6 +2064,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>
> } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
> != oldpage);
> +
> + if (memcg_cache_dead(s)) {
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> + local_irq_restore(flags);
> + }
> #endif
> }
>
> @@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
> unsigned long flags;
>
> + if (memcg_cache_dead(s))
> + s->min_partial = 0;
> +
> if (!slabs_by_inuse) {
> /*
> * Do not fail shrinking empty slabs if allocation of the
I think you should move the n->nr_partial test in __kmem_cache_shrink() down
so that it is done after taking the node's list_lock. Accessing n->nr_partial
without the node lock is racy, so we can read a stale value; that can make us
skip freeing an empty slab, and then your cache-destruction logic could fail.
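Roughly what I have in mind, as an untested sketch (the body of the shrink
loop is abbreviated, and the exact iteration helpers may differ in your tree):

	for_each_node_state(node, N_NORMAL_MEMORY) {
		n = get_node(s, node);

		/*
		 * Take the node lock first; only then is n->nr_partial
		 * stable enough to decide whether there is anything to do.
		 */
		spin_lock_irqsave(&n->list_lock, flags);
		if (!n->nr_partial) {
			spin_unlock_irqrestore(&n->list_lock, flags);
			continue;
		}

		/* ... sort partial slabs by inuse and free empty ones ... */

		spin_unlock_irqrestore(&n->list_lock, flags);
	}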
Thanks.