Message-ID: <CH3PR11MB73454C44EC8BCD43685BCB58FC749@CH3PR11MB7345.namprd11.prod.outlook.com>
Date: Thu, 11 May 2023 00:53:26 +0000
From: "Zhang, Cathy" <cathy.zhang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Eric Dumazet <edumazet@...gle.com>, Linux MM <linux-mm@...ck.org>, Cgroups
	<cgroups@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>,
	"davem@...emloft.net" <davem@...emloft.net>, "kuba@...nel.org"
	<kuba@...nel.org>, "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Srinivas, Suresh" <suresh.srinivas@...el.com>, "Chen, Tim C"
	<tim.c.chen@...el.com>, "You, Lizhen" <lizhen.you@...el.com>,
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>, "netdev@...r.kernel.org"
	<netdev@...r.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
 size



> -----Original Message-----
> From: Shakeel Butt <shakeelb@...gle.com>
> Sent: Thursday, May 11, 2023 3:00 AM
> To: Zhang, Cathy <cathy.zhang@...el.com>
> Cc: Eric Dumazet <edumazet@...gle.com>; Linux MM <linux-
> mm@...ck.org>; Cgroups <cgroups@...r.kernel.org>; Paolo Abeni
> <pabeni@...hat.com>; davem@...emloft.net; kuba@...nel.org;
> Brandeburg, Jesse <jesse.brandeburg@...el.com>; Srinivas, Suresh
> <suresh.srinivas@...el.com>; Chen, Tim C <tim.c.chen@...el.com>; You,
> Lizhen <lizhen.you@...el.com>; eric.dumazet@...il.com;
> netdev@...r.kernel.org
> Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
> 
> On Wed, May 10, 2023 at 9:09 AM Zhang, Cathy <cathy.zhang@...el.com>
> wrote:
> >
> >
> [...]
> > > > >
> > > > > Have you tried to increase batch sizes?
> > > >
> > > > I just picked 256 and 1024 for a try, but it didn't help; the
> > > > overhead still exists.
> > >
> > > This makes no sense at all.
> >
> > Eric,
> >
> > I added a pr_info() in try_charge_memcg() to print nr_pages whenever
> > nr_pages >= MEMCG_CHARGE_BATCH. Apart from printing 64 during the
> > initialization of the instances, there is no other output while the
> > workload runs. That means nr_pages never exceeds 64, which I guess is
> > why increasing MEMCG_CHARGE_BATCH doesn't help this case.
> >
> 
> I am assuming you increased MEMCG_CHARGE_BATCH to 256 and 1024 but
> that did not help. To me that just means there is a different bottleneck
> in the memcg charging codepath. Can you please share the perf profile?
> Please note that memcg charging does a lot of other things as well, like
> updating memcg stats and checking (and enforcing) memory.high, even if
> you have not set memory.high.

Thanks, Shakeel! I will check the details of what you mentioned. We use
"sudo perf top -p $(docker inspect -f '{{.State.Pid}}' memcached_2)" to
monitor one of the instances, and "sudo perf top" to check the overhead
system-wide.
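
For reference, the debug hunk mentioned above is along these lines (a
minimal sketch against try_charge_memcg() in mm/memcontrol.c; the exact
placement and format string are illustrative, not the literal patch):

	/* Debug only: report charge requests at or above the batch size. */
	if (nr_pages >= MEMCG_CHARGE_BATCH)
		pr_info("%s: nr_pages=%u batch=%u\n",
			__func__, nr_pages, MEMCG_CHARGE_BATCH);

With MEMCG_CHARGE_BATCH at 64, the only hits were the 64-page charges
during instance startup, so larger charges never show up in this workload.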
