Date: Wed, 10 May 2023 17:07:22 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: "Zhang, Cathy" <cathy.zhang@...el.com>
Cc: Shakeel Butt <shakeelb@...gle.com>, Linux MM <linux-mm@...ck.org>, 
	Cgroups <cgroups@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>, 
	"davem@...emloft.net" <davem@...emloft.net>, "kuba@...nel.org" <kuba@...nel.org>, 
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>, "Srinivas, Suresh" <suresh.srinivas@...el.com>, 
	"Chen, Tim C" <tim.c.chen@...el.com>, "You, Lizhen" <lizhen.you@...el.com>, 
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size

On Wed, May 10, 2023 at 3:54 PM Zhang, Cathy <cathy.zhang@...el.com> wrote:
>
> > -----Original Message-----
> > From: Eric Dumazet <edumazet@...gle.com>
> > Sent: Wednesday, May 10, 2023 7:25 PM
> > To: Zhang, Cathy <cathy.zhang@...el.com>
> > Cc: Shakeel Butt <shakeelb@...gle.com>; Linux MM <linux-mm@...ck.org>;
> > Cgroups <cgroups@...r.kernel.org>; Paolo Abeni <pabeni@...hat.com>;
> > davem@...emloft.net; kuba@...nel.org; Brandeburg, Jesse
> > <jesse.brandeburg@...el.com>; Srinivas, Suresh
> > <suresh.srinivas@...el.com>; Chen, Tim C <tim.c.chen@...el.com>; You,
> > Lizhen <lizhen.you@...el.com>; eric.dumazet@...il.com;
> > netdev@...r.kernel.org
> > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> > size
> >
> > On Wed, May 10, 2023 at 1:11 PM Zhang, Cathy <cathy.zhang@...el.com>
> > wrote:
> > >
> > > Hi Shakeel, Eric and all,
> > >
> > > How about adding a memory pressure check in sk_mem_uncharge() to
> > > decide whether to keep part of the memory? That could help avoid both
> > > the issue you fixed and the problem we see on systems with more CPUs.
> > >
> > > The draft looks like this:
> > >
> > > static inline void sk_mem_uncharge(struct sock *sk, int size)
> > > {
> > >         int reclaimable;
> > >         int reclaim_threshold = SK_RECLAIM_THRESHOLD;
> > >
> > >         if (!sk_has_account(sk))
> > >                 return;
> > >         sk->sk_forward_alloc += size;
> > >
> > >         if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
> > >             mem_cgroup_under_socket_pressure(sk->sk_memcg)) {
> > >                 sk_mem_reclaim(sk);
> > >                 return;
> > >         }
> > >
> > >         reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);
> > >
> > >         if (reclaimable > reclaim_threshold) {
> > >                 reclaimable -= reclaim_threshold;
> > >                 __sk_mem_reclaim(sk, reclaimable);
> > >         }
> > > }
> > >
> > > I've run a test with the new code and the result looks good: it does
> > > not introduce latency, and RPS stays the same.
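A side note on the draft: SK_RECLAIM_THRESHOLD does not exist in current
net-next (the per-socket reclaim constants were removed when
sk->sk_forward_alloc was made as small as possible), so the patch would
have to reintroduce it. A minimal sketch, assuming a value on the order
of the old, since-removed definition (in bytes):

	/* Assumed value, for illustration only; mirrors the magnitude of
	 * the old per-socket reclaim threshold that used to live in
	 * include/net/sock.h before per-socket caching was dropped.
	 */
	#define SK_RECLAIM_THRESHOLD	(1 << 21)	/* cache at most ~2 MB per socket */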
> > >
> >
> > It will not work for sockets that go idle after a burst.
> > If we restore per-socket caches, we will need a shrinker.
> > Trust me, we do not want that kind of big hammer crushing latencies.
> >
> > Have you tried to increase batch sizes ?
>
> I just tried 256 and 1024, but it did not help; the overhead still exists.

This makes no sense at all.

I suspect a plain bug in mm/memcontrol.c

I will let mm experts work on this.
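For context on what that batch size amortizes: memcg charging keeps a
per-CPU stock of pre-charged pages (consume_stock()/refill_stock() in
mm/memcontrol.c) so that most charges avoid an atomic update of the
shared page_counter. Below is a minimal single-threaded userspace model
of that amortization; it is not the kernel code (the real stock is also
keyed to the cached memcg):

	#include <stdio.h>

	#define MEMCG_CHARGE_BATCH 64U	/* value since commit 1813e51eece0 */

	static unsigned long shared_counter;	/* models the memcg page_counter */
	static unsigned int stock_nr_pages;	/* models one CPU's cached stock */
	static unsigned long slow_paths;	/* shared-counter updates */

	static void charge(unsigned int nr_pages)
	{
		unsigned int batch;

		/* Fast path: consume from the per-CPU stock. */
		if (nr_pages <= stock_nr_pages) {
			stock_nr_pages -= nr_pages;
			return;
		}
		/* Slow path: charge a whole batch to the shared counter
		 * and keep the surplus cached for later fast-path charges.
		 */
		batch = nr_pages > MEMCG_CHARGE_BATCH ? nr_pages : MEMCG_CHARGE_BATCH;
		shared_counter += batch;
		stock_nr_pages += batch - nr_pages;
		slow_paths++;
	}

	int main(void)
	{
		for (int i = 0; i < 1000; i++)
			charge(1);
		/* With a batch of 64, 1000 one-page charges touch the
		 * shared counter only 16 times.
		 */
		printf("shared updates: %lu\n", slow_paths);
		return 0;
	}

In this model, going from a batch of 64 to 256 or 1024 can only shrink
the slow-path frequency; if the measured overhead does not move at all,
the cost is presumably not these shared-counter updates, which is
consistent with suspecting a bug elsewhere in mm/memcontrol.c.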

>
> >
> > Any kind of cache (even per-cpu) might need some adjustment when core
> > count or expected traffic increases.
> > This was hinted at in
> > commit 1813e51eece0ad6f4aacaeb738e7cced46feb470
> > Author: Shakeel Butt <shakeelb@...gle.com>
> > Date:   Thu Aug 25 00:05:06 2022 +0000
> >
> >     memcg: increase MEMCG_CHARGE_BATCH to 64
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 222d7370134c73e59fdbdf598ed8d66897dbbf1d..0418229d30c25d114132a1ed46ac01358cf21424 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -334,7 +334,7 @@ struct mem_cgroup {
> >   * TODO: maybe necessary to use big numbers in big irons or dynamic based of the
> >   * workload.
> >   */
> > -#define MEMCG_CHARGE_BATCH 64U
> > +#define MEMCG_CHARGE_BATCH 128U
> >
> >  extern struct mem_cgroup *root_mem_cgroup;
> >
> > diff --git a/include/net/sock.h b/include/net/sock.h
> > index 656ea89f60ff90d600d16f40302000db64057c64..82f6a288be650f886b207e6a5e62a1d5dda808b0 100644
> > --- a/include/net/sock.h
> > +++ b/include/net/sock.h
> > @@ -1433,8 +1433,8 @@ sk_memory_allocated(const struct sock *sk)
> >         return proto_memory_allocated(sk->sk_prot);
> >  }
> >
> > -/* 1 MB per cpu, in page units */
> > -#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
> > +/* 2 MB per cpu, in page units */
> > +#define SK_MEMORY_PCPU_RESERVE (1 << (21 - PAGE_SHIFT))
> >
> >  static inline void
> >  sk_memory_allocated_add(struct sock *sk, int amt)
> >
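To make the second knob concrete: SK_MEMORY_PCPU_RESERVE bounds a
per-CPU delta that sk_memory_allocated_add()/sk_memory_allocated_sub()
accumulate before folding it into the shared per-protocol
memory_allocated counter, so doubling the reserve roughly halves the
shared-counter traffic at the cost of more pages being temporarily
invisible to global accounting. A minimal userspace model, not the
kernel code:

	#include <stdio.h>
	#include <stdlib.h>

	#define PAGE_SHIFT 12	/* assume 4 KB pages */
	/* 1 MB per cpu, in page units (the current value). */
	#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))

	static long memory_allocated;	/* models the shared atomic counter */
	static long pcpu_fw_alloc;	/* models one CPU's local delta */
	static long flushes;

	static void memory_allocated_add(int amt)
	{
		pcpu_fw_alloc += amt;
		/* Fold into the shared counter only once the local delta
		 * (in either direction) reaches the reserve.
		 */
		if (labs(pcpu_fw_alloc) >= SK_MEMORY_PCPU_RESERVE) {
			memory_allocated += pcpu_fw_alloc;
			pcpu_fw_alloc = 0;
			flushes++;
		}
	}

	int main(void)
	{
		/* 65536 one-page charges with a 256-page reserve hit the
		 * shared counter 65536 / 256 = 256 times.
		 */
		for (int i = 0; i < 65536; i++)
			memory_allocated_add(1);
		printf("shared updates: %ld, cached locally: %ld pages\n",
		       flushes, pcpu_fw_alloc);
		return 0;
	}

The trade-off is bounded inaccuracy: with N CPUs, up to roughly
N * SK_MEMORY_PCPU_RESERVE pages can sit in per-CPU deltas without ever
showing up in the global protocol memory accounting.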
> >
> > > > -----Original Message-----
> > > > From: Shakeel Butt <shakeelb@...gle.com>
> > > > Sent: Wednesday, May 10, 2023 12:10 AM
> > > > To: Eric Dumazet <edumazet@...gle.com>; Linux MM <linux-
> > > > mm@...ck.org>; Cgroups <cgroups@...r.kernel.org>
> > > > Cc: Zhang, Cathy <cathy.zhang@...el.com>; Paolo Abeni
> > > > <pabeni@...hat.com>; davem@...emloft.net; kuba@...nel.org;
> > > > Brandeburg, Jesse <jesse.brandeburg@...el.com>; Srinivas, Suresh
> > > > <suresh.srinivas@...el.com>; Chen, Tim C <tim.c.chen@...el.com>;
> > > > You, Lizhen <lizhen.you@...el.com>; eric.dumazet@...il.com;
> > > > netdev@...r.kernel.org
> > > > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as
> > > > a proper size
> > > >
> > > > +linux-mm & cgroup
> > > >
> > > > Thread: https://lore.kernel.org/all/20230508020801.10702-1-cathy.zhang@...el.com/
> > > >
> > > > On Tue, May 9, 2023 at 8:43 AM Eric Dumazet <edumazet@...gle.com>
> > > > wrote:
> > > > >
> > > > [...]
> > > > > Some mm experts should chime in; this is not a networking issue.
> > > >
> > > > Most of the MM folks are busy at LSFMM this week. I will take a look
> > > > at this soon.
