Message-ID: <20220627023812.GA29314@shbuild999.sh.intel.com>
Date: Mon, 27 Jun 2022 10:38:12 +0800
From: Feng Tang <feng.tang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>,
Eric Dumazet <edumazet@...gle.com>
Cc: Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Muchun Song <songmuchun@...edance.com>,
Jakub Kicinski <kuba@...nel.org>,
Xin Long <lucien.xin@...il.com>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
kernel test robot <oliver.sang@...el.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
network dev <netdev@...r.kernel.org>,
linux-s390@...r.kernel.org, MPTCP Upstream <mptcp@...ts.linux.dev>,
"linux-sctp @ vger . kernel . org" <linux-sctp@...r.kernel.org>,
lkp@...ts.01.org, kbuild test robot <lkp@...el.com>,
Huang Ying <ying.huang@...el.com>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>, Ying Xu <yinxu@...hat.com>
Subject: Re: [net] 4890b686f4: netperf.Throughput_Mbps -69.4% regression
On Sat, Jun 25, 2022 at 10:36:42AM +0800, Feng Tang wrote:
> On Fri, Jun 24, 2022 at 02:43:58PM +0000, Shakeel Butt wrote:
> > On Fri, Jun 24, 2022 at 03:06:56PM +0800, Feng Tang wrote:
> > > On Thu, Jun 23, 2022 at 11:34:15PM -0700, Shakeel Butt wrote:
> > [...]
> > > >
> > > > Feng, can you please explain the memcg setup on these test machines
> > > > and if the tests are run in root or non-root memcg?
> > >
> > > I don't know the exact setup; Philip/Oliver from 0Day can correct me.
> > >
> > > I logged into a test box which runs netperf test, and it seems to be
> > > cgroup v1 with a non-root memcg. The netperf tasks all sit in the dir:
> > > '/sys/fs/cgroup/memory/system.slice/lkp-bootstrap.service'
> > >
> >
> > Thanks Feng. Can you check the value of memory.kmem.tcp.max_usage_in_bytes
> > in /sys/fs/cgroup/memory/system.slice/lkp-bootstrap.service after making
> > sure that the netperf test has already run?
>
> memory.kmem.tcp.max_usage_in_bytes:0
Sorry, I made a mistake: in the original report from Oliver, it was
'cgroup v2' with a 'debian-11.1' rootfs.
When you asked about the cgroup info, I tried the job on another tbox,
and the original 'job.yaml' didn't work, so I kept the 'netperf' test
parameters and started a new job, which somehow ran with a 'debian-10.4'
rootfs and actually ran with cgroup v1.
And as you mentioned, the cgroup version does make a big difference:
with v1, the regression is reduced to 1% ~ 5% on different generations
of test platforms. Eric mentioned they also got a regression report,
but a much smaller one; maybe that's due to the cgroup version?
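
If it helps, below is my simplified reading of mem_cgroup_charge_skmem()
in mm/memcontrol.c (paraphrased from memory, failure/pressure handling
elided, so not the exact upstream code); it matches the v1/v2 split in
the call chain quoted further down:

/*
 * Simplified paraphrase of mm/memcontrol.c::mem_cgroup_charge_skmem();
 * not the exact upstream code, error handling elided.
 */
bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg,
			     unsigned int nr_pages, gfp_t gfp_mask)
{
	/* cgroup v1: charge the separate kmem.tcp counter */
	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
		struct page_counter *fail;

		if (page_counter_try_charge(&memcg->tcpmem, nr_pages,
					    &fail)) {
			memcg->tcpmem_pressure = 0;
			return true;
		}
		return false;
	}

	/* cgroup v2: charge the memcg's main memory counter */
	if (try_charge(memcg, gfp_mask, nr_pages) == 0) {
		mod_memcg_state(memcg, MEMCG_SOCK, nr_pages);
		return true;
	}

	return false;
}

So under v1 the socket memory only goes to the separate tcpmem counter,
while under v2 it is charged to the memcg's main memory counter via
try_charge(), which may be part of why the two setups behave so
differently here.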
Thanks,
Feng
> And here are more memcg stats (let me know if you want to check more)
>
> > If this is non-zero then network memory accounting is enabled and the
> > slowdown is expected.
>
> From the perf-profile data in the original report, both
> __sk_mem_raise_allocated() and __sk_mem_reduce_allocated() are called
> much more often, and they in turn call the memcg charge/uncharge functions.
>
> IIUC, the call chain is:
>
> __sk_mem_raise_allocated
>     sk_memory_allocated_add
>     mem_cgroup_charge_skmem
>         charge memcg->tcpmem   (for cgroup v1)
>         try_charge memcg       (for cgroup v2)
>
> Also, from one of Eric's earlier commit logs:
>
> "
> net: implement per-cpu reserves for memory_allocated
> ...
> This means we are going to call sk_memory_allocated_add()
> and sk_memory_allocated_sub() more often.
> ...
> "
>
> So is this slowdown related to the more frequent calls to charge/uncharge?
>
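
For context, here is my rough paraphrase of the per-cpu reserve that
patch adds (written from memory, so not the exact upstream code). The
global memory_allocated counter is only touched once a per-cpu reserve
drifts by roughly 1MB, so the extra sk_memory_allocated_add()/_sub()
calls themselves stay cheap; the memcg charge/uncharge is still done on
every __sk_mem_raise_allocated()/__sk_mem_reduce_allocated(), which is
what shows up in the profile above.

/* ~1MB worth of pages kept as a per-cpu reserve (paraphrased) */
#define SK_MEMORY_PCPU_RESERVE	(1 << (20 - PAGE_SHIFT))

static inline void sk_memory_allocated_add(struct sock *sk, int amt)
{
	int local_reserve;

	preempt_disable();
	local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
	if (local_reserve >= SK_MEMORY_PCPU_RESERVE) {
		/* flush the per-cpu reserve to the global atomic counter */
		__this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
	}
	preempt_enable();
}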
> Thanks,
> Feng
>
> > > And the rootfs is a Debian-based rootfs
> > >
> > > Thanks,
> > > Feng
> > >
> > >
> > > > thanks,
> > > > Shakeel