Message-ID: <09ea1749-8978-091b-7727-d86f8e6c49cc@redhat.com>
Date: Mon, 19 Apr 2021 19:42:07 -0400
From: Waiman Long <llong@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>, Roman Gushchin <guro@...com>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>,
Alex Shi <alex.shi@...ux.alibaba.com>,
Chris Down <chris@...isdown.name>,
Yafang Shao <laoar.shao@...il.com>,
Wei Yang <richard.weiyang@...il.com>,
Masayoshi Mizuma <msys.mizuma@...il.com>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v4 2/5] mm/memcg: Cache vmstat data in percpu
memcg_stock_pcp
On 4/19/21 12:38 PM, Johannes Weiner wrote:
> On Sun, Apr 18, 2021 at 08:00:29PM -0400, Waiman Long wrote:
>> Before the new slab memory controller with per object byte charging,
>> charging and vmstat data update happen only when new slab pages are
>> allocated or freed. Now they are done with every kmem_cache_alloc()
>> and kmem_cache_free(). This causes additional overhead for workloads
>> that generate a lot of alloc and free calls.
>>
>> The memcg_stock_pcp is used to cache byte charge for a specific
>> obj_cgroup to reduce that overhead. To further reduce it, this patch
>> caches the vmstat data in the memcg_stock_pcp structure as well, until
>> it accumulates a page's worth of updates or another cached item
>> changes. Caching the vmstat data in the per-cpu stock replaces two
>> writes to non-hot cachelines (for the memcg-specific and the
>> memcg-lruvec-specific vmstat data) with a single write to a hot local
>> stock cacheline.
>>
>> On a 2-socket Cascade Lake server with instrumentation enabled and this
>> patch applied, about 20% (634400 out of 3243830) of the calls to
>> mod_objcg_state() after initial boot led to an actual call to
>> __mod_objcg_state(). During a parallel kernel build, the figure was
>> about 17% (24329265 out of 142512465). So caching the vmstat data
>> reduces the number of calls to __mod_objcg_state() by more than 80%.
>>
>> Signed-off-by: Waiman Long <longman@...hat.com>
>> Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
>> ---
>> mm/memcontrol.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
>> 1 file changed, 61 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index dc9032f28f2e..693453f95d99 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2213,7 +2213,10 @@ struct memcg_stock_pcp {
>>
>>  #ifdef CONFIG_MEMCG_KMEM
>>          struct obj_cgroup *cached_objcg;
>> +        struct pglist_data *cached_pgdat;
>>          unsigned int nr_bytes;
>> +        int vmstat_idx;
>> +        int vmstat_bytes;
>>  #endif
>>
>>          struct work_struct work;
>> @@ -3150,8 +3153,9 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>>          css_put(&memcg->css);
>>  }
>>
>> -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> -                     enum node_stat_item idx, int nr)
>> +static inline void __mod_objcg_state(struct obj_cgroup *objcg,
>> +                                     struct pglist_data *pgdat,
>> +                                     enum node_stat_item idx, int nr)
> This naming is dangerous, as the __mod_foo naming scheme we use
> everywhere else suggests it's the same function as mod_foo() just with
> preemption/irqs disabled.
>
I will change its name to, say, mod_objcg_mlstate() to indicate that it
is something different. It is actually hard to come up with a good name
that isn't too long.
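For reference, roughly what I have in mind (just the rename; the body is
unchanged from the patch and the name is still tentative):

static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
                                     struct pglist_data *pgdat,
                                     enum node_stat_item idx, int nr)
{
        struct mem_cgroup *memcg;
        struct lruvec *lruvec;

        rcu_read_lock();
        memcg = obj_cgroup_memcg(objcg);
        lruvec = mem_cgroup_lruvec(memcg, pgdat);
        __mod_memcg_lruvec_state(lruvec, idx, nr);
        rcu_read_unlock();
}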
>> @@ -3159,10 +3163,53 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>>          rcu_read_lock();
>>          memcg = obj_cgroup_memcg(objcg);
>>          lruvec = mem_cgroup_lruvec(memcg, pgdat);
>> -        mod_memcg_lruvec_state(lruvec, idx, nr);
>> +        __mod_memcg_lruvec_state(lruvec, idx, nr);
>>          rcu_read_unlock();
>>  }
>>
>> +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> +                     enum node_stat_item idx, int nr)
>> +{
>> +        struct memcg_stock_pcp *stock;
>> +        unsigned long flags;
>> +
>> +        local_irq_save(flags);
>> +        stock = this_cpu_ptr(&memcg_stock);
>> +
>> +        /*
>> +         * Save vmstat data in stock and skip vmstat array update unless
>> +         * accumulating over a page of vmstat data or when pgdat or idx
>> +         * changes.
>> +         */
>> +        if (stock->cached_objcg != objcg) {
>> +                /* Output the current data as is */
> When you get here with the wrong objcg and hit the cold path, it's
> usually immediately followed by an uncharge -> refill_obj_stock() that
> will then flush and reset cached_objcg.
>
> Instead of doing two cold paths, why not flush the old objcg right
> away and set the new so that refill_obj_stock() can use the fast path?
That is a good idea. Will do that.
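Something like this at the top of mod_objcg_state(), I suppose (a rough
sketch; same shape as the snippet you propose further down):

        if (stock->cached_objcg != objcg) {
                /* Flush the old objcg's data and cache the new objcg */
                drain_obj_stock(stock);
                obj_cgroup_get(objcg);
                stock->cached_objcg = objcg;
                stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
        }

A subsequent refill_obj_stock() should then take the fast path.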
>
>> +        } else if (!stock->vmstat_bytes) {
>> +                /* Save the current data */
>> +                stock->vmstat_bytes = nr;
>> +                stock->vmstat_idx = idx;
>> +                stock->cached_pgdat = pgdat;
>> +                nr = 0;
>> +        } else if ((stock->cached_pgdat != pgdat) ||
>> +                   (stock->vmstat_idx != idx)) {
>> +                /* Output the cached data & save the current data */
>> +                swap(nr, stock->vmstat_bytes);
>> +                swap(idx, stock->vmstat_idx);
>> +                swap(pgdat, stock->cached_pgdat);
> Is this optimization worth doing?
>
> You later split vmstat_bytes, and idx doesn't change anymore.
I am going to merge patches 2 and 4 to avoid this confusion.
>
> How often does the pgdat change? This is a per-cpu cache after all,
> and the numa node a given cpu allocates from tends to not change that
> often. Even with interleaving mode, which I think is pretty rare, the
> interleaving happens at the slab/page level, not the object level, and
> the cache isn't bigger than a page anyway.
The testing done on a 2-socket system indicated that pgdat changes
roughly 10-20% of the time. So it does happen, especially on the kfree()
path, I think. I tried caching vmstat updates only for objects on the
local node, but that produced more misses. So I am just going to change
pgdat and flush out the existing data for now.
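So a pgdat change would just write back the cached bytes and switch
nodes, roughly like this (sketch only, using the tentative
mod_objcg_mlstate() name from above):

        if (stock->cached_pgdat != pgdat) {
                /* Write back the bytes cached for the old node ... */
                if (stock->vmstat_bytes) {
                        mod_objcg_mlstate(objcg, stock->cached_pgdat,
                                          stock->vmstat_idx,
                                          stock->vmstat_bytes);
                        stock->vmstat_bytes = 0;
                }
                /* ... and start accumulating for the new one */
                stock->cached_pgdat = pgdat;
        }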
>
>> +        } else {
>> +                stock->vmstat_bytes += nr;
>> +                if (abs(stock->vmstat_bytes) > PAGE_SIZE) {
>> +                        nr = stock->vmstat_bytes;
>> +                        stock->vmstat_bytes = 0;
>> +                } else {
>> +                        nr = 0;
>> +                }
> ..and this is the regular overflow handling done by the objcg and
> memcg charge stock as well.
>
> How about this?
>
>         if (stock->cached_objcg != objcg ||
>             stock->cached_pgdat != pgdat ||
>             stock->vmstat_idx != idx) {
>                 drain_obj_stock(stock);
>                 obj_cgroup_get(objcg);
>                 stock->cached_objcg = objcg;
>                 stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
>                 stock->vmstat_idx = idx;
>         }
>         stock->vmstat_bytes += nr_bytes;
>
>         if (abs(stock->vmstat_bytes) > PAGE_SIZE)
>                 drain_obj_stock(stock);
>
> (Maybe we could be clever here, since the charge and stat caches are
> the same size: don't flush an oversized charge cache from
> refill_obj_stock in the charge path, but leave it to the
> mod_objcg_state() that follows; likewise don't flush an undersized
> vmstat stock from mod_objcg_state() in the uncharge path, but leave it
> to the refill_obj_stock() that follows. Could get a bit complicated...)
If you look at patch 5, I am trying to avoid doing drain_obj_stock()
unless the objcg changes. I am going to do the same here.
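That is, reserve the full drain for an objcg switch and only write back
the cached vmstat delta otherwise, roughly (a sketch of the intent only,
since patch 5 isn't quoted here):

        if (stock->cached_objcg != objcg) {
                /* Full flush only when the objcg itself changes */
                drain_obj_stock(stock);
                obj_cgroup_get(objcg);
                stock->cached_objcg = objcg;
        } else if ((stock->cached_pgdat != pgdat) ||
                   (stock->vmstat_idx != idx)) {
                /* Otherwise just write back the cached vmstat delta */
                if (stock->vmstat_bytes) {
                        mod_objcg_mlstate(objcg, stock->cached_pgdat,
                                          stock->vmstat_idx,
                                          stock->vmstat_bytes);
                        stock->vmstat_bytes = 0;
                }
                stock->cached_pgdat = pgdat;
                stock->vmstat_idx = idx;
        }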
Cheers,
Longman