Message-ID: <20201120143012.GB103521@shbuild999.sh.intel.com>
Date: Fri, 20 Nov 2020 22:30:12 +0800
From: Feng Tang <feng.tang@...el.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Waiman Long <longman@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Tejun Heo <tj@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Yafang Shao <laoar.shao@...il.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
lkp@...el.com, zhengjun.xing@...el.com, ying.huang@...el.com
Subject: Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops
-22.7% regression
On Fri, Nov 20, 2020 at 02:19:44PM +0100, Michal Hocko wrote:
> On Fri 20-11-20 19:44:24, Feng Tang wrote:
> > On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> > > On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> > > > > > > I added one phony page_counter after the union and re-tested; the
> > > > > > > regression reduced to -1.2%. It looks like the regression is caused
> > > > > > > by the data structure layout change.
> > > > > >
> > > > > > Thanks for double checking. Could you try to cache align the
> > > > > > page_counter struct? If that helps then we should figure out which
> > > > > > counters clash with each other by adding alignment between the
> > > > > > respective counters.
> > > > >
> > > > > We tried the below patch to make 'page_counter' aligned.
> > > > >
> > > > > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > > > > index bab7e57..9efa6f7 100644
> > > > > --- a/include/linux/page_counter.h
> > > > > +++ b/include/linux/page_counter.h
> > > > > @@ -26,7 +26,7 @@ struct page_counter {
> > > > >  	/* legacy */
> > > > >  	unsigned long watermark;
> > > > >  	unsigned long failcnt;
> > > > > -};
> > > > > +} ____cacheline_internodealigned_in_smp;
> > > > >
> > > > > and with it, the -22.7% performance change turns into a small -1.7%,
> > > > > which confirms the regression is caused by the change to data alignment.
> > > > >
> > > > > After the patch, the size of 'page_counter' increases from 104 bytes
> > > > > to 128 bytes, and the size of 'mem_cgroup' increases from 2880 bytes
> > > > > to 3008 bytes (with our kernel config). Another major data structure which
> > > > > contains 'page_counter' is 'hugetlb_cgroup', whose size will change
> > > > > from 912B to 1024B.
> > > > >
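For illustration, here is a minimal userspace sketch of where the
104 -> 128 growth comes from: on a typical SMP x86 config the
internode-alignment macro boils down to a 64-byte aligned attribute,
and sizeof() is rounded up to a multiple of the alignment. The field
below is a stand-in for page_counter's 104 bytes of members, not the
real layout:

	#include <stdio.h>

	/* ~104 bytes of payload, like page_counter on this config */
	struct counter_plain {
		long words[13];
	};

	/* same payload, forced onto a 64-byte cache-line boundary */
	struct counter_aligned {
		long words[13];
	} __attribute__((aligned(64)));

	int main(void)
	{
		printf("plain:   %zu\n", sizeof(struct counter_plain));  /* 104 */
		printf("aligned: %zu\n", sizeof(struct counter_aligned)); /* 128 */
		return 0;
	}

(On CONFIG_X86_VSMP the internode alignment is much larger, so the
exact numbers are config dependent.)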
> > > > > Should we make these page_counters aligned to reduce cacheline conflicts?
> > > >
> > > > I would rather focus on a more effective mem_cgroup layout. It is very
> > > > likely that we are just stumbling over two counters here.
> > > >
> > > > Could you try to add cache alignment of counters after memory and see
> > > > which one makes the difference? I do not expect memsw to be the one
> > > > because that one is used together with the main counter. But who knows,
> > > > maybe the way it crosses the cache line has exactly that effect. Hard to
> > > > tell without other numbers.
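(For reference, the per-counter experiment could look like the sketch
below, annotating one candidate per run. '____cacheline_aligned_in_smp'
is the usual kernel annotation for pushing a member onto its own cache
line; this is only a sketch, not the exact patches that were tested:

	struct page_counter memory;		/* Both v1 & v2 */

	/* run 1: isolate the union holding swap/memsw */
	union {
		struct page_counter swap;	/* v2 only */
		struct page_counter memsw;	/* v1 only */
	} ____cacheline_aligned_in_smp;

	/* run 2: annotate kmem instead, and so on */
	struct page_counter kmem ____cacheline_aligned_in_smp;
)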
> > >
> > > I added some alignment changes around 'memsw', but none of them could
> > > restore the -22.7%. Following are some logs showing the resulting
> > > alignments:
> > >
> > > t1: memcg=0x7cd1000 memory=0x7cd10d0 memsw=0x7cd1140 kmem=0x7cd11b0 tcpmem=0x7cd1220
> > > t2: memcg=0x7cd0000 memory=0x7cd00d0 memsw=0x7cd0140 kmem=0x7cd01c0 tcpmem=0x7cd0230
> > >
> > > So both 'memsw' counters are aligned, but t2's 'kmem' is aligned while
> > > t1's is not.
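(The addresses in the log came from a debug print of this kind; the
statement below is a reconstruction for illustration, not the exact
lines used:

	pr_info("memcg=%px memory=%px memsw=%px kmem=%px tcpmem=%px\n",
		memcg, &memcg->memory, &memcg->memsw,
		&memcg->kmem, &memcg->tcpmem);
)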
> > >
> > > I will look deeper into the perf data for detailed hotspots.
> >
> > Some more updates from further checking:
> >
> > Waiman's patch effectively removes one 'struct page_counter' between
> > 'memory' and 'memsw'. And the mem_cgroup layout is:
> >
> > struct mem_cgroup {
> >
> > ...
> >
> > struct page_counter memory; /* Both v1 & v2 */
> >
> > union {
> > struct page_counter swap; /* v2 only */
> > struct page_counter memsw; /* v1 only */
> > };
> >
> > /* Legacy consumer-oriented counters */
> > struct page_counter kmem; /* v1 only */
> > struct page_counter tcpmem; /* v1 only */
> >
> > ...
> > ...
> >
> > MEMCG_PADDING(_pad1_);
> >
> > atomic_t moving_account;
> > struct task_struct *move_lock_task;
> >
> > ...
> > };
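(For context, MEMCG_PADDING on SMP builds is roughly the following,
i.e. a zero-size member aligned to the internode cache line, so
everything after it starts on a fresh cache line. This is quoted from
memory of include/linux/memcontrol.h, so details may differ in this
exact tree:

	#if defined(CONFIG_SMP)
	struct memcg_padding {
		char x[0];
	} ____cacheline_internodealigned_in_smp;
	#define MEMCG_PADDING(name)	struct memcg_padding name;
	#else
	#define MEMCG_PADDING(name)
	#endif
)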
> >
> >
> > I did experiments inserting a 'page_counter' between 'memory'
> > and 'MEMCG_PADDING(_pad1_)': no matter where I put it, the
> > benchmark result recovers from 145K to 185K, which is really
> > confusing, as adding a 'page_counter' right before '_pad1_'
> > doesn't change the cache alignment of any member.
>
> Have you checked the pahole output before and after your modification
> to see whether something stands out?

I cannot find anything abnormal. (I attached the pahole logs for two
kernels: one whose head commit is Waiman's patch, and one which adds a
page_counter before '_pad1_'.)
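(The pahole logs were produced with the usual invocation against a
vmlinux built with debug info, something like:

	$ pahole -C mem_cgroup vmlinux
	$ pahole -C page_counter vmlinux

which prints each member's offset and size, plus any holes in between.)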
> Btw. is this reproducible on different CPU models?

This is a Haswell 4S platform. I've also tried Cascade Lake 2S and 4S,
which show -7.7% and -4.2% regressions respectively, though the perf data
shows a similar trend.

Thanks,
Feng
>
> --
> Michal Hocko
> SUSE Labs