Message-ID: <20201113073926.GB113119@shbuild999.sh.intel.com>
Date: Fri, 13 Nov 2020 15:39:26 +0800
From: Feng Tang <feng.tang@...el.com>
To: Waiman Long <longman@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Tejun Heo <tj@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Yafang Shao <laoar.shao@...il.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
lkp@...el.com, zhengjun.xing@...el.com, ying.huang@...el.com
Subject: Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops
-22.7% regression
On Thu, Nov 12, 2020 at 11:43:45AM -0500, Waiman Long wrote:
> >>We tried the below patch to make 'page_counter' cacheline aligned.
> >> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> >> index bab7e57..9efa6f7 100644
> >> --- a/include/linux/page_counter.h
> >> +++ b/include/linux/page_counter.h
> >> @@ -26,7 +26,7 @@ struct page_counter {
> >> /* legacy */
> >> unsigned long watermark;
> >> unsigned long failcnt;
> >> -};
> >> +} ____cacheline_internodealigned_in_smp;
> >>and with it, the -22.7% performance change turns into a small -1.7%, which
> >>confirms that the regression is caused by the change in data alignment.
> >>
> >>After the patch, the size of 'page_counter' increases from 104 bytes to 128
> >>bytes, and the size of 'mem_cgroup' increases from 2880 bytes to 3008
> >>bytes (with our kernel config). Another major data structure which
> >>contains a 'page_counter' is 'hugetlb_cgroup', whose size will change
> >>from 912B to 1024B.
> >>
> >>Should we make these page_counters aligned to reduce cacheline conflict?
> >I would rather focus on a more effective mem_cgroup layout. It is very
> >likely that we are just stumbling over two counters here.
> >
> >Could you try adding cache alignment to the counters after 'memory' and see
> >which one makes the difference? I do not expect memsw to be the one,
> >because that one is used together with the main counter. But who knows,
> >maybe the way it crosses the cache line has exactly that effect. Hard to
> >tell without other numbers.
> >
> >Btw. it would be great to see what the effect is on cgroup v2 as well.
> >
> >Thanks for pursuing this!
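
The per-counter experiment could look roughly like the below, if I understand
the suggestion correctly (an untested sketch only; the field order is assumed
from our tree, and ____cacheline_aligned_in_smp could just as well be the
internode variant used in the patch above):

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ struct mem_cgroup {
 	struct page_counter memory;
 
 	union {
 		struct page_counter swap;
 		struct page_counter memsw;
 	};
 
-	struct page_counter kmem;
+	/* test: push 'kmem' (and everything after it) to a new cacheline */
+	struct page_counter kmem ____cacheline_aligned_in_smp;
 	struct page_counter tcpmem;

and then the same one-line change moved to each of the other counters after
'memory' in turn, so it becomes visible which counter's placement actually
matters.
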
>
> The contention may be in the page counters themselves, or it may be in other
> fields below the page counters. The cacheline alignment will cause
> "high_work", which comes just after the page counters, to start at a
> cacheline boundary. I will try removing the cacheline alignment from the page
> counter and adding it to high_work to see if there is any change in
> performance. If there is no change, the performance problem will not be in
> the page counters.
Yes, that's a good spot to check. I even suspect it could be other members of
'struct mem_cgroup' that affect the benchmark, as we've seen some other
performance changes which are possibly related to it too.
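
For reference, that variant would be roughly the below on our side, i.e.
leaving the page counters at their natural alignment and only pushing
'high_work' to a cacheline boundary (again an untested sketch, assuming the
field order in our tree where 'high_work' directly follows the counters):

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ struct mem_cgroup {
 	struct page_counter kmem;
 	struct page_counter tcpmem;
 
-	struct work_struct high_work;
+	/* test: align only the first field after the counters */
+	struct work_struct high_work ____cacheline_aligned_in_smp;

That keeps sizeof(struct page_counter) unchanged while still starting the
fields below the counters on a fresh cacheline.
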
Thanks,
Feng