Date: Wed, 10 May 2023 07:03:43 +0000
From: "Zhang, Cathy" <cathy.zhang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>, "Chen, Tim C" <tim.c.chen@...el.com>
CC: "edumazet@...gle.com" <edumazet@...gle.com>, "davem@...emloft.net"
	<davem@...emloft.net>, "kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "Brandeburg, Jesse"
	<jesse.brandeburg@...el.com>, "Srinivas, Suresh" <suresh.srinivas@...el.com>,
	"You, Lizhen" <lizhen.you@...el.com>, "eric.dumazet@...il.com"
	<eric.dumazet@...il.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "cgroups@...r.kernel.org"
	<cgroups@...r.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
 size



> -----Original Message-----
> From: Shakeel Butt <shakeelb@...gle.com>
> Sent: Wednesday, May 10, 2023 2:18 AM
> To: Chen, Tim C <tim.c.chen@...el.com>
> Cc: Zhang, Cathy <cathy.zhang@...el.com>; edumazet@...gle.com;
> davem@...emloft.net; kuba@...nel.org; pabeni@...hat.com; Brandeburg,
> Jesse <jesse.brandeburg@...el.com>; Srinivas, Suresh
> <suresh.srinivas@...el.com>; You, Lizhen <lizhen.you@...el.com>;
> eric.dumazet@...il.com; netdev@...r.kernel.org; linux-mm@...ck.org;
> cgroups@...r.kernel.org
> Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
> 
> On Tue, May 9, 2023 at 11:04 AM Chen, Tim C <tim.c.chen@...el.com> wrote:
> >
> > >>
> > >> Run memcached with memtier_benchmark to verify the optimization
> > >> fix. 8 server-client pairs are created with bridge network on
> > >> localhost, server and client of the same pair share 28 logical CPUs.
> > >>
> > > >Results (average of 5 runs)
> > > >RPS (with/without patch)     +2.07x
> > > >
> >
> > >Do you have regression data from any production workload? Please keep
> in mind that many times we (the MM subsystem) accept microbenchmark
> regressions over complicated optimizations. So, if there is a real
> production regression, please be very explicit about it.
> >
> > Though memcached is actually used by people in production, so this isn't
> an unrealistic scenario.
> >
> 
> Yes, memcached is used in production, but I am not sure anyone runs 8 pairs
> of server and client on the same machine for a production workload. Anyway,
> we can discuss the practicality of the benchmark, if needed, after we have
> some impactful memcg optimizations.

The test is run on a platform with 224 CPUs (HT enabled). It's not a must to
run 8 pairs; the memcg charge hot paths can be observed even with a single
pair, given more CPUs. Running 8 pairs simply leverages all CPU resources
across TCP connections to stress the contention.
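For reference, the CPU partitioning described above (8 pairs, each sharing a
28-logical-CPU slice of a 224-CPU machine) can be sketched as a small shell
loop. The ports, the taskset pinning, and the memtier_benchmark flags here are
illustrative assumptions, not the exact commands used in the test; the loop
only prints the candidate command lines rather than launching anything:

```shell
# Sketch: partition 224 logical CPUs into 8 slices of 28, one slice per
# memcached/memtier_benchmark pair; server and client of the same pair
# share the same slice. Ports and pinning method are assumptions.
NPAIRS=8
CPUS_PER_PAIR=28
for i in $(seq 0 $((NPAIRS - 1))); do
  START=$((i * CPUS_PER_PAIR))
  END=$((START + CPUS_PER_PAIR - 1))
  PORT=$((11211 + i))
  # Print (not run) the pinned server and client invocations for this pair.
  echo "taskset -c ${START}-${END} memcached -d -p ${PORT}"
  echo "taskset -c ${START}-${END} memtier_benchmark -s 127.0.0.1 -p ${PORT}"
done
```

With one pair instead of eight, the same slice arithmetic gives that pair the
full CPU range, which is the single-pair variant mentioned above.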

> 
> > Tim
