Message-ID: <75328f78-117b-5042-f42e-5ef01dd93086@suse.cz>
Date: Mon, 25 May 2020 18:10:55 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
kernel-team@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 09/19] mm: memcg/slab: charge individual slab objects
instead of pages
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> Switch to per-object accounting of non-root slab objects.
>
> Charging is performed using the obj_cgroup API in the pre_alloc hook.
> The obj_cgroup is charged with the size of the object plus the size of
> the metadata: for now, that's the size of an obj_cgroup pointer.
> If the amount of memory has been charged successfully, the actual
> allocation code is executed. Otherwise, -ENOMEM is returned.
>
> In the post_alloc hook, if the actual allocation succeeded, the
> corresponding vmstats are bumped and the obj_cgroup pointer is saved.
> Otherwise, the charge is canceled.
>
> On the free path the obj_cgroup pointer is obtained and used to
> uncharge the size of the object being released.
>
> Memcg and lruvec counters now represent only the memory used by
> active slab objects and do not include the free space. The free
> space is shared and doesn't belong to any specific cgroup.
>
> Global per-node slab vmstats are still modified in the
> (un)charge_slab_page() functions. The idea is to keep all slab pages
> accounted as slab pages at the system level.
>
> Signed-off-by: Roman Gushchin <guro@...com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
Some suggestions below:
> @@ -568,32 +548,33 @@ static __always_inline int charge_slab_page(struct page *page,
> gfp_t gfp, int order,
> struct kmem_cache *s)
> {
> - int ret;
> -
> - if (is_root_cache(s)) {
> - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> - PAGE_SIZE << order);
> - return 0;
> - }
> +#ifdef CONFIG_MEMCG_KMEM
> + if (!is_root_cache(s)) {
This could also benefit from the memcg_kmem_enabled() static key test,
AFAICS. Maybe even add a wrapper for both tests together?
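Something like this (untested, name made up):

static inline bool slab_memcg_charge_needed(struct kmem_cache *s)
{
	return memcg_kmem_enabled() && !is_root_cache(s);
}

so configs and systems without kmem accounting keep a static-branch-only
fast path here.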
> + int ret;
>
> - ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> - if (ret)
> - return ret;
> + ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
You created an empty variant of memcg_alloc_page_obj_cgroups() for
!CONFIG_MEMCG_KMEM, but now its only caller is under CONFIG_MEMCG_KMEM,
so that stub is unused.
> + if (ret)
> + return ret;
>
> - return memcg_charge_slab(page, gfp, order, s);
> + percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
Perhaps moving this refcount get into memcg_alloc_page_obj_cgroups()
(maybe the name should then be different) would let you avoid adding
#ifdef CONFIG_MEMCG_KMEM in this function.
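Roughly (untested, helper name made up; the !CONFIG_MEMCG_KMEM variant
would be an empty stub returning 0):

#ifdef CONFIG_MEMCG_KMEM
static inline int memcg_slab_init_page(struct page *page, gfp_t gfp,
				       int order, struct kmem_cache *s)
{
	/* allocate the obj_cgroup pointer array for this slab page */
	int ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));

	if (ret)
		return ret;

	/* pin the cache while the page can carry charged objects */
	percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
	return 0;
}
#else
static inline int memcg_slab_init_page(struct page *page, gfp_t gfp,
				       int order, struct kmem_cache *s)
{
	return 0;
}
#endif

Then charge_slab_page() needs no #ifdef at all, and the branch compiles
away where memcg_kmem_enabled() is constant false.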
Maybe this is all moot after patch 12/19, will find out :)
> + }
> +#endif
> + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> + PAGE_SIZE << order);
> + return 0;
> }
>
> static __always_inline void uncharge_slab_page(struct page *page, int order,
> struct kmem_cache *s)
> {
> - if (is_root_cache(s)) {
> - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> - -(PAGE_SIZE << order));
> - return;
> +#ifdef CONFIG_MEMCG_KMEM
> + if (!is_root_cache(s)) {
Everything from above also applies here.
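And analogously on the free side (same caveats as above):

#ifdef CONFIG_MEMCG_KMEM
static inline void memcg_slab_release_page(struct page *page, int order,
					   struct kmem_cache *s)
{
	/* free the obj_cgroup pointer array and unpin the cache */
	memcg_free_page_obj_cgroups(page);
	percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
}
#else
static inline void memcg_slab_release_page(struct page *page, int order,
					   struct kmem_cache *s)
{
}
#endif

which would keep uncharge_slab_page() free of #ifdefs too.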
> + memcg_free_page_obj_cgroups(page);
> + percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
> }
> -
> - memcg_free_page_obj_cgroups(page);
> - memcg_uncharge_slab(page, order, s);
> +#endif
> + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> + -(PAGE_SIZE << order));
> }
>
> static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)