Message-ID: <20220703104353.GB62281@shbuild999.sh.intel.com>
Date: Sun, 3 Jul 2022 18:43:53 +0800
From: Feng Tang <feng.tang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Eric Dumazet <edumazet@...gle.com>, Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Muchun Song <songmuchun@...edance.com>,
Jakub Kicinski <kuba@...nel.org>,
Xin Long <lucien.xin@...il.com>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
kernel test robot <oliver.sang@...el.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
network dev <netdev@...r.kernel.org>,
linux-s390@...r.kernel.org, MPTCP Upstream <mptcp@...ts.linux.dev>,
"linux-sctp @ vger . kernel . org" <linux-sctp@...r.kernel.org>,
lkp@...ts.01.org, kbuild test robot <lkp@...el.com>,
Huang Ying <ying.huang@...el.com>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>, Ying Xu <yinxu@...hat.com>
Subject: Re: [net] 4890b686f4: netperf.Throughput_Mbps -69.4% regression

Hi Shakeel,

On Fri, Jul 01, 2022 at 08:47:29AM -0700, Shakeel Butt wrote:
> On Mon, Jun 27, 2022 at 8:49 PM Feng Tang <feng.tang@...el.com> wrote:
> > I just tested it, it does perform better (the 4th is with your patch),
> > some perf-profile data is also listed.
> >
> > 7c80b038d23e1f4c 4890b686f4088c90432149bd6de 332b589c49656a45881bca4ecc0 e719635902654380b23ffce908d
> > ---------------- --------------------------- --------------------------- ---------------------------
> > 15722 -69.5% 4792 -40.8% 9300 -27.9% 11341 netperf.Throughput_Mbps
> >
> > 0.00 +0.3 0.26 ± 5% +0.5 0.51 +1.3 1.27 ± 2% pp.self.__sk_mem_raise_allocated
> > 0.00 +0.3 0.32 ± 15% +1.7 1.74 ± 2% +0.4 0.40 ± 2% pp.self.propagate_protected_usage
> > 0.00 +0.8 0.82 ± 7% +0.9 0.90 +0.8 0.84 pp.self.__mod_memcg_state
> > 0.00 +1.2 1.24 ± 4% +1.0 1.01 +1.4 1.44 pp.self.try_charge_memcg
> > 0.00 +2.1 2.06 +2.1 2.13 +2.1 2.11 pp.self.page_counter_uncharge
> > 0.00 +2.1 2.14 ± 4% +2.7 2.71 +2.6 2.60 ± 2% pp.self.page_counter_try_charge
> > 1.12 ± 4% +3.1 4.24 +1.1 2.22 +1.4 2.51 pp.self.native_queued_spin_lock_slowpath
> > 0.28 ± 9% +3.8 4.06 ± 4% +0.2 0.48 +0.4 0.68 pp.self.sctp_eat_data
> > 0.00 +8.2 8.23 +0.8 0.83 +1.3 1.26 pp.self.__sk_mem_reduce_allocated
> >
> > And the size of 'struct mem_cgroup' is increased from 4224 bytes to 4608 bytes.
>
> Hi Feng, can you please try two more configurations? Take Eric's patch
> of adding ____cacheline_aligned_in_smp in page_counter and, for the first
> run, increase MEMCG_CHARGE_BATCH to 64; for the second, increase it to 128.
> Basically, the batch increases combined with Eric's patch.
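
For reference, the combination under test looks roughly like the below.
This is only a sketch of the idea (field list abbreviated, and the
MEMCG_CHARGE_BATCH default of 32 is my reading of the current tree),
not the exact patches:

/*
 * Align each page_counter so the counters embedded in struct
 * mem_cgroup (memory, swap, kmem, tcpmem, ...) stop sharing cache
 * lines; the extra padding is where the struct size growth comes from.
 */
struct page_counter {
	atomic_long_t usage;	/* hot: written on every charge/uncharge */
	unsigned long min;
	unsigned long low;
	unsigned long high;
	unsigned long max;
	/* ... watermark, failcnt, parent ... */
} ____cacheline_aligned_in_smp;

/* include/linux/memcontrol.h: default 32U; tried 64 and 128 below */
#define MEMCG_CHARGE_BATCH	128U
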
With the batch increased to 128 on top of Eric's patch, the regression is
reduced to -12.4%. More detailed perf-profile data is below:
7c80b038d23e1f4c 4890b686f4088c90432149bd6de Eric's patch Eric's patch + batch-64 Eric's patch + batch-128
---------------- --------------------------- --------------------------- --------------------------- ---------------------------
15722 -69.5% 4792 -27.9% 11341 -14.0% 13521 -12.4% 13772 netperf.Throughput_Mbps
0.05 +0.2 0.27 ± 18% +0.0 0.08 ± 6% -0.1 0.00 -0.0 0.03 ±100% pp.self.timekeeping_max_deferment
0.00 +0.3 0.26 ± 5% +1.3 1.27 ± 2% +1.8 1.82 ± 10% +2.0 1.96 ± 9% pp.self.__sk_mem_raise_allocated
0.00 +0.3 0.32 ± 15% +0.4 0.40 ± 2% +0.1 0.10 ± 5% +0.0 0.00 pp.self.propagate_protected_usage
0.00 +0.8 0.82 ± 7% +0.8 0.84 +0.5 0.48 +0.4 0.36 ± 2% pp.self.__mod_memcg_state
0.00 +1.2 1.24 ± 4% +1.4 1.44 +0.4 0.40 ± 3% +0.2 0.24 ± 6% pp.self.try_charge_memcg
0.00 +2.1 2.06 +2.1 2.11 +0.5 0.50 +0.2 0.18 ± 8% pp.self.page_counter_uncharge
0.00 +2.1 2.14 ± 4% +2.6 2.60 ± 2% +0.6 0.58 +0.2 0.20 pp.self.page_counter_try_charge
1.12 ± 4% +3.1 4.24 +1.4 2.51 +1.0 2.10 ± 2% +1.0 2.10 ± 9% pp.self.native_queued_spin_lock_slowpath
0.28 ± 9% +3.8 4.06 ± 4% +0.4 0.68 +0.6 0.90 ± 9% +0.7 1.00 ± 11% pp.self.sctp_eat_data
0.00 +8.2 8.23 +1.3 1.26 +1.7 1.72 ± 6% +2.0 1.95 ± 10% pp.self.__sk_mem_reduce_allocated
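
The batch size matters because most charges are served from the per-CPU
memcg stock, and only a stock miss has to touch the shared page_counter;
a larger batch therefore means fewer page_counter_try_charge() and
page_counter_uncharge() calls, which matches those rows shrinking in the
profile above. Very roughly, with made-up names (memcg_stock_sketch,
consume_stock_sketch) just for illustration, not the kernel code verbatim:

struct memcg_stock_sketch {
	struct mem_cgroup *cached;	/* memcg this stock belongs to */
	unsigned int nr_pages;		/* pre-charged pages still available */
};

static DEFINE_PER_CPU(struct memcg_stock_sketch, stock_sketch);

static bool consume_stock_sketch(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct memcg_stock_sketch *stock = this_cpu_ptr(&stock_sketch);

	/* Fast path: no atomics on the shared page_counter at all. */
	if (stock->cached == memcg && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;
		return true;
	}

	/*
	 * Slow path (not shown): page_counter_try_charge() for up to
	 * MEMCG_CHARGE_BATCH pages and cache the surplus here, so a
	 * bigger batch makes this path rarer.
	 */
	return false;
}
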

Thanks,
Feng