Message-ID: <20140610074317.GE19036@js1304-P5Q-DELUXE>
Date: Tue, 10 Jun 2014 16:43:17 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Vladimir Davydov <vdavydov@...allels.com>
Cc: akpm@...ux-foundation.org, cl@...ux.com, rientjes@...gle.com,
penberg@...nel.org, hannes@...xchg.org, mhocko@...e.cz,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH -mm v2 8/8] slab: make dead memcg caches discard free
slabs immediately
On Fri, Jun 06, 2014 at 05:22:45PM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches,
> otherwise they will be hanging around forever.
>
> This patch makes SLAB discard dead memcg caches' slabs as soon as they
> become empty. To achieve that, it disables per cpu free object arrays by
> setting array_cache->limit to 0 on each cpu and sets per node free_limit
> to 0 in order to zap slabs_free lists. This is done on kmem_cache_shrink
> (in do_drain, drain_array, drain_alien_cache, and drain_freelist to be
> more exact), which is always called on memcg offline (see
> memcg_unregister_all_caches).
>
> Note, since array_cache->limit and kmem_cache_node->free_limit are per
> cpu/node and, as a result, they may be updated on cpu/node
> online/offline, we have to patch every place where the limits are
> initialized.
Hello,
You mentioned that disabling the per cpu arrays would degrade performance,
yet this patch is implemented by disabling them. Is there any particular
reason to do it this way? How about leaving the per cpu arrays (and the
other caching) enabled? Leaving things as they are would make the patch
less intrusive and keep the performance impact low, and I guess the amount
of memory reclaimed would not differ much between the two approaches.
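
Just to make sure we are talking about the same mechanism, here is a rough
userspace sketch (not the actual mm/slab.c code; every name below is made
up for illustration) of what setting array_cache->limit to 0 means for the
free path: with a zero limit nothing is ever cached per cpu and every freed
object goes straight back to its slab, and applying the same idea to
kmem_cache_node->free_limit keeps slabs_free empty so empty slabs can be
discarded immediately.

#include <stdio.h>

#define MAX_CACHED 16

struct toy_array_cache {
	unsigned int avail;		/* objects currently cached on this cpu */
	unsigned int limit;		/* forced to 0 for a dead memcg cache */
	void *entry[MAX_CACHED];
};

static unsigned int objs_on_slabs = 32;	/* pretend objects sitting on slab pages */

/* Stand-in for giving an object back to its slab page. */
static void return_to_slab(void *obj)
{
	objs_on_slabs++;
	printf("object %p returned to slab (%u objects on slabs)\n",
	       obj, objs_on_slabs);
}

/* Free path: cache the object per cpu only if the limit allows it. */
static void toy_free(struct toy_array_cache *ac, void *obj)
{
	if (ac->avail < ac->limit) {
		ac->entry[ac->avail++] = obj;	/* cached; its slab stays pinned */
		printf("object %p cached per cpu (%u/%u)\n",
		       obj, ac->avail, ac->limit);
	} else {
		return_to_slab(obj);		/* limit == 0: always take this path */
	}
}

int main(void)
{
	int a, b;
	struct toy_array_cache live = { .limit = MAX_CACHED };
	struct toy_array_cache dead = { .limit = 0 };	/* dead memcg cache */

	toy_free(&live, &a);	/* object gets cached per cpu */
	toy_free(&dead, &b);	/* object goes straight back to its slab */
	return 0;
}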
Thanks.