Message-ID: <4d9e18ea-3100-8311-e969-a376096a0b60@suse.cz>
Date: Tue, 26 May 2020 12:12:59 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
kernel-team@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 12/19] mm: memcg/slab: use a single set of kmem_caches
for all accounted allocations
On 4/22/20 10:47 PM, Roman Gushchin wrote:
> This is fairly big but mostly red patch, which makes all accounted
> slab allocations use a single set of kmem_caches instead of
> creating a separate set for each memory cgroup.
>
> Because the number of non-root kmem_caches is now capped by the number
> of root kmem_caches, there is no need to shrink or destroy them
> prematurely. They can be perfectly destroyed together with their
> root counterparts. This makes it possible to dramatically simplify
> the management of non-root kmem_caches and delete a ton of code.
>
> This patch performs the following changes:
> 1) introduces memcg_params.memcg_cache pointer to represent the
> kmem_cache which will be used for all non-root allocations
> 2) reuses the existing memcg kmem_cache creation mechanism
> to create memcg kmem_cache on the first allocation attempt
> 3) memcg kmem_caches are named <kmemcache_name>-memcg,
> e.g. dentry-memcg
> 4) simplifies memcg_kmem_get_cache() to just return the memcg kmem_cache
> or schedule its creation and return the root cache
> 5) removes almost all non-root kmem_cache management code
> (separate refcounter, reparenting, shrinking, etc)
> 6) makes slab debugfs display the root_mem_cgroup css id and never
> show the :dead and :deact flags in the memcg_slabinfo attribute
>
> Following patches in the series will simplify the kmem_cache creation.
>
> Signed-off-by: Roman Gushchin <guro@...com>
> ---
> include/linux/memcontrol.h | 5 +-
> include/linux/slab.h | 5 +-
> mm/memcontrol.c | 163 +++-----------
> mm/slab.c | 16 +-
> mm/slab.h | 145 ++++---------
> mm/slab_common.c | 426 ++++---------------------------------
> mm/slub.c | 38 +---
> 7 files changed, 128 insertions(+), 670 deletions(-)
Nice stats.
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> @@ -548,17 +502,14 @@ static __always_inline int charge_slab_page(struct page *page,
> gfp_t gfp, int order,
> struct kmem_cache *s)
> {
> -#ifdef CONFIG_MEMCG_KMEM
Ah, indeed. Still, wouldn't there be less churn if the ref manipulation
was done in memcg_alloc/free_page_obj()?
> if (!is_root_cache(s)) {
> int ret;
>
> ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> if (ret)
> return ret;
> -
> - percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
> }
> -#endif
> +
> mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> PAGE_SIZE << order);
> return 0;