Message-ID: <YOhh0IabpRk/W/qR@cmpxchg.org>
Date: Fri, 9 Jul 2021 10:48:48 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: tj@...nel.org, mhocko@...nel.org, vdavydov.dev@...il.com,
akpm@...ux-foundation.org, shakeelb@...gle.com, guro@...com,
songmuchun@...edance.com, shy828301@...il.com, alexs@...nel.org,
alexander.h.duyck@...ux.intel.com, richard.weiyang@...il.com,
vbabka@...e.cz, axboe@...nel.dk, iamjoonsoo.kim@....com,
david@...hat.com, willy@...radead.org, apopple@...dia.com,
minchan@...nel.org, linmiaohe@...wei.com,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, kernel-team@...roid.com
Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to
improve disabled memcg config
On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> functions to perform the mem_cgroup_disabled() static key check inline
> before calling the main body of the function. This minimizes the memcg
> overhead in the pagefault and exit_mmap paths when memcgs are disabled
> using the cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running the PFT
> test comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
> cgroup_disable=memory} configurations on an 8-core ARM64 Android
> device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
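For context, mem_cgroup_disabled() boils down to a static branch, so
once the check is inlined the disabled case costs little more than a
patched-out jump at each callsite. Roughly (a sketch from memory, not
the exact upstream definitions):

	/* include/linux/cgroup.h */
	#define cgroup_subsys_enabled(ss) \
		static_branch_likely(&ss ## _enabled_key)

	/* include/linux/memcontrol.h */
	static inline bool mem_cgroup_disabled(void)
	{
		return !cgroup_subsys_enabled(memory_cgrp_subsys);
	}
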
Sounds reasonable to me as well. One comment:
> @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> page_counter_read(&memcg->memory);
> }
>
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> +
> +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> + gfp_t gfp);
> +/**
> + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * @page: page to charge
> + * @mm: mm context of the victim
> + * @gfp_mask: reclaim mode
> + *
> + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> + * pages according to @gfp_mask if necessary. If @mm is NULL, try to
> + * charge to the active memcg.
> + *
> + * Do not use this for pages allocated for swapin.
> + *
> + * Returns 0 on success. Otherwise, an error code is returned.
> + */
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> + gfp_t gfp_mask)
> +{
> + struct mem_cgroup *memcg;
> + int ret;
> +
> + if (mem_cgroup_disabled())
> + return 0;
> +
> + memcg = get_mem_cgroup_from_mm(mm);
> + ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> + css_put(&memcg->css);
> +
> + return ret;
Why not do
int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask);

static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				    gfp_t gfp_mask)
{
	if (mem_cgroup_disabled())
		return 0;

	return __mem_cgroup_charge(page, mm, gfp_mask);
}
like in the other cases as well?
That would avoid inlining two separate function calls into all the
callsites...
There is an (internal) __mem_cgroup_charge() already, but you can
rename it to charge_memcg().
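
IOW, the header inline would carry only the mem_cgroup_disabled()
check, and the mm lookup would stay out of line. Something like this
in mm/memcontrol.c (sketch, untested), with the current internal
__mem_cgroup_charge() renamed to charge_memcg():

	/* was the internal __mem_cgroup_charge(); does the actual charging */
	static int charge_memcg(struct page *page, struct mem_cgroup *memcg,
				gfp_t gfp);

	int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				gfp_t gfp_mask)
	{
		struct mem_cgroup *memcg;
		int ret;

		memcg = get_mem_cgroup_from_mm(mm);
		ret = charge_memcg(page, memcg, gfp_mask);
		css_put(&memcg->css);

		return ret;
	}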