Message-ID: <CH3PR11MB73458B537328CB4F96BE3497FC779@CH3PR11MB7345.namprd11.prod.outlook.com>
Date: Wed, 10 May 2023 07:32:03 +0000
From: "Zhang, Cathy" <cathy.zhang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>, "Chen, Tim C" <tim.c.chen@...el.com>
CC: "edumazet@...gle.com" <edumazet@...gle.com>, "davem@...emloft.net"
	<davem@...emloft.net>, "kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "Brandeburg, Jesse"
	<jesse.brandeburg@...el.com>, "Srinivas, Suresh" <suresh.srinivas@...el.com>,
	"You, Lizhen" <lizhen.you@...el.com>, "eric.dumazet@...il.com"
	<eric.dumazet@...il.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "cgroups@...r.kernel.org"
	<cgroups@...r.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
 size



> -----Original Message-----
> From: Zhang, Cathy
> Sent: Wednesday, May 10, 2023 3:04 PM
> To: Shakeel Butt <shakeelb@...gle.com>; Chen, Tim C
> <tim.c.chen@...el.com>
> Cc: edumazet@...gle.com; davem@...emloft.net; kuba@...nel.org;
> pabeni@...hat.com; Brandeburg, Jesse <jesse.brandeburg@...el.com>;
> Srinivas, Suresh <suresh.srinivas@...el.com>; You, Lizhen
> <Lizhen.You@...el.com>; eric.dumazet@...il.com; netdev@...r.kernel.org;
> linux-mm@...ck.org; cgroups@...r.kernel.org
> Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
> 
> 
> 
> > -----Original Message-----
> > From: Shakeel Butt <shakeelb@...gle.com>
> > Sent: Wednesday, May 10, 2023 2:18 AM
> > To: Chen, Tim C <tim.c.chen@...el.com>
> > Cc: Zhang, Cathy <cathy.zhang@...el.com>; edumazet@...gle.com;
> > davem@...emloft.net; kuba@...nel.org; pabeni@...hat.com;
> > Brandeburg,
> > Jesse <jesse.brandeburg@...el.com>; Srinivas, Suresh
> > <suresh.srinivas@...el.com>; You, Lizhen <lizhen.you@...el.com>;
> > eric.dumazet@...il.com; netdev@...r.kernel.org; linux-mm@...ck.org;
> > cgroups@...r.kernel.org
> > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a
> > proper size
> >
> > On Tue, May 9, 2023 at 11:04 AM Chen, Tim C <tim.c.chen@...el.com>
> > wrote:
> > >
> > > >>
> > > >> Run memcached with memtier_benchmark to verify the optimization
> > > >> fix. 8 server-client pairs are created with a bridge network on
> > > >> localhost; the server and client of each pair share 28 logical CPUs.
> > > >>
> > > > >Results (average of 5 runs)
> > > > >RPS (with/without patch)     +2.07x
> > > > >
> > >
> > > >Do you have regression data from any production workload? Please
> > > >keep in mind that the MM subsystem often accepts microbenchmark
> > > >regressions rather than taking on complicated optimizations. So, if
> > > >there is a real production regression, please be very explicit about it.
> > >
> > > Memcached is actually used by people in production, though, so this
> > > isn't an unrealistic scenario.
> > >
> >
> > Yes, memcached is used in production, but I am not sure anyone runs 8
> > server-client pairs on the same machine as a production workload.
> > Anyway, if needed, we can discuss the practicality of the benchmark
> > after we have some impactful memcg optimizations.
> 
> The test is run on a platform with 224 CPUs (HT enabled). It is not
> necessary to run 8 pairs; the memcg charge hot paths can be observed
> with only one pair if it is given more CPUs. The idea is to use all
> CPU resources for TCP connections in order to stress the contention.

If we run fewer server-client pairs (<= 3), each pair sharing 28 CPUs,
so that at most 84 CPUs are actually busy, there is no obvious memcg
charge overhead observed. But when we use more than 112 CPUs (>= 4
pairs) to stress the system with TCP memory allocation, the memcg
charge path becomes the bottleneck.
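For reference, here is a minimal sketch of how one such pinned
server-client pair could be driven (assuming memcached and
memtier_benchmark are installed; the CPU range, port, thread counts
and test duration below are illustrative, and the bridge-network
container setup used in the actual runs is simplified here to plain
localhost pinning with taskset):

import subprocess
import time

# Illustrative only: one memcached server and one memtier_benchmark
# client pinned to the same 28 logical CPUs (0-27), talking over
# localhost. Values are examples, not the exact test configuration.
CPUS = "0-27"
PORT = "11211"

# Start memcached with 28 worker threads on the chosen CPU set.
server = subprocess.Popen(
    ["taskset", "-c", CPUS, "memcached", "-p", PORT, "-t", "28"])
time.sleep(1)  # give the server a moment to start listening

try:
    # Drive load against it from the same CPU set for 60 seconds.
    subprocess.run(
        ["taskset", "-c", CPUS, "memtier_benchmark",
         "-s", "127.0.0.1", "-p", PORT,
         "--protocol=memcache_text",
         "--threads=28", "--clients=4", "--test-time=60"],
        check=True)
finally:
    server.terminate()
    server.wait()

Running 4 or more such pairs, each on its own disjoint 28-CPU set, is
what pushes the machine past the ~112-CPU point described above.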

> 
> >
> > > Tim
