Message-ID: <20210104025314.GA32269@shbuild999.sh.intel.com>
Date: Mon, 4 Jan 2021 10:53:14 +0800
From: Feng Tang <feng.tang@...el.com>
To: Roman Gushchin <guro@...com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, andi.kleen@...el.com,
tim.c.chen@...el.com, dave.hansen@...el.com, ying.huang@...el.com,
Shakeel Butt <shakeelb@...gle.com>
Subject: Re: [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH
Hi Roman,
On Tue, Dec 29, 2020 at 09:13:27AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:14PM +0800, Feng Tang wrote:
> > When profiling benchmarks that involve memory cgroups, the stats
> > update sometimes takes quite a few CPU cycles. Currently
> > MEMCG_CHARGE_BATCH is used for both charging and statistics/events
> > updating, and is set to 32, which may be good for the accuracy of
> > memcg charging, but is too small for stats updating, causing
> > concurrent access to the global stats data instead of the per-cpu ones.
> >
> > So handle them differently, by adding a new, bigger batch number for
> > stats updating, while keeping the existing value for charging (though
> > the comment in memcontrol.h suggests considering a bigger value there
> > too).
> >
> > The new batch is set to 512, sized for 2MB huge pages (512 base
> > pages), as the check logic is basically:
> >
> > if (x > BATCH), update the global data; otherwise keep it per-cpu
> >
> > so for 2MB pages it cuts the global data updates by about 50%.
> >
> > Following are some performance data with the patch, against
> > v5.11-rc1, on several generations of Xeon platforms. Each category
> > below has several subcases run on different platforms, and only the
> > worst and best scores are listed:
> >
> > fio: +2.0% ~ +6.8%
> > will-it-scale/malloc: -0.9% ~ +6.2%
> > will-it-scale/page_fault1: no change
> > will-it-scale/page_fault2: +13.7% ~ +26.2%
>
> I wonder if there are any wins noticeable in the real world?
> Lowering the accuracy of statistics makes them harder to interpret,
> so it should be very well justified.
This is a valid concern. So far I only have test results for fio,
will-it-scale and vm-scalability (mostly improvements), and I will try
to run some Redis/RocksDB-like workloads. I have seen hotspots related
to memcg statistics counting in some customers' reports, which is part
of the motivation for the patch.
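For reference, the kind of check being tuned is roughly the sketch
below, a simplified version of the __mod_memcg_state()-style per-cpu
accumulation (no hierarchy propagation, no byte-sized counters);
MEMCG_UPDATE_BATCH is the new constant this patch proposes, the rest is
only illustrative, not the exact kernel code:

    /* Simplified sketch of the batched per-cpu stats update path. */
    #define MEMCG_CHARGE_BATCH	32	/* kept as-is for charging */
    #define MEMCG_UPDATE_BATCH	512	/* proposed for stats/events updating */

    void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
    {
    	long x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);

    	if (unlikely(abs(x) > MEMCG_UPDATE_BATCH)) {
    		/* Per-cpu delta grew past the batch: fold it into the global counter. */
    		atomic_long_add(x, &memcg->vmstats[idx]);
    		x = 0;
    	}
    	__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
    }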
> 512 * nr_cpus is a large number.
I also tested 128, 256, 2048 and 4096, which all show similar gains
with the benchmarks above; 512 was chosen to match 2MB huge pages
(512 base pages). 128 would be less harmful to accuracy.
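To spell out where the "about 50%" estimate in the changelog comes from
(my own arithmetic, assuming 4KB base pages and a per-cpu counter
starting from zero):

    2MB THP = 512 base pages per stats update

    BATCH == 32:  every THP update has |x| > 32, so every update
                  hits the global counters
    BATCH == 512: 1st THP: x = 0   + 512 = 512  -> not > 512, stays per-cpu
                  2nd THP: x = 512 + 512 = 1024 -> > 512, flushed to global
                  i.e. roughly every other THP update touches the global
                  data, or ~50% fewer global updates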
> >
> > One thought is that the batch could be calculated dynamically from
> > the memcg limit and the number of CPUs, and another is to add
> > periodic syncing of the data for accuracy, similar to vmstat, as
> > suggested by Ying.
>
> It sounds good to me, but it's quite tricky to implement properly,
> given that the number of cgroups can be really big. That makes
> traversing the whole cgroup tree and syncing the stats quite expensive,
> so it will not be easy to find a good balance.
Agreed. Also, could you shed some light on how these statistics are
used, so that we can better understand the usage?
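To make the vmstat-like idea a bit more concrete, something like the
sketch below is what I had in mind (all names besides
for_each_mem_cgroup() and the workqueue helpers are made up for
illustration; as you point out, the walk over a large hierarchy is the
expensive part):

    /* Hypothetical periodic flush, loosely modeled on vmstat_update(). */
    static void memcg_stats_flush_workfn(struct work_struct *work);
    static DECLARE_DELAYED_WORK(memcg_stats_flush_work, memcg_stats_flush_workfn);

    static void memcg_stats_flush_workfn(struct work_struct *work)
    {
    	struct mem_cgroup *memcg;

    	/* Fold the per-cpu deltas of every cgroup into the global counters. */
    	for_each_mem_cgroup(memcg)
    		memcg_flush_percpu_vmstats(memcg);	/* hypothetical helper */

    	schedule_delayed_work(&memcg_stats_flush_work, HZ);
    }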
Thanks again for the valuable feedback!
- Feng
> Thanks!