Message-ID: <CALvZod5sbwXYqPZavojs1cvspxZv1iFHBG8=LQGNodinLXVL=w@mail.gmail.com>
Date: Thu, 11 May 2023 10:10:28 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "Zhang, Cathy" <cathy.zhang@...el.com>, Linux MM <linux-mm@...ck.org>,
	Cgroups <cgroups@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"kuba@...nel.org" <kuba@...nel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Srinivas, Suresh" <suresh.srinivas@...el.com>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	"You, Lizhen" <lizhen.you@...el.com>,
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size

On Thu, May 11, 2023 at 9:35 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
[...]
>
> The suspect part is really:
>
> >      8.98%  mc-worker  [kernel.vmlinux]  [k] page_counter_cancel
> >             |
> >             --8.97%--page_counter_cancel
> >                       |
> >                       --8.97%--page_counter_uncharge
> >                                 drain_stock
> >                                 __refill_stock
> >                                 refill_stock
> >                                 |
> >                                 --8.91%--try_charge_memcg
> >                                           mem_cgroup_charge_skmem
> >                                           |
> >                                           --8.91%--__sk_mem_raise_allocated
> >                                                     __sk_mem_schedule
>
> Shakeel, networking has a per-cpu cache of +/- 1MB.
>
> Even with asymmetric alloc/free, this would mean that a 100Gbit NIC
> would require something like 25,000 operations on the shared cache
> line per second.
>
> Hardly an issue, I think.
>
> memcg does not seem to have an equivalent strategy?

memcg has a +256KiB per-cpu cache (note the absence of '-'). However,
it seems Cathy has already tested with 4MiB (a 1024-page batch), which
is comparable to networking's per-cpu cache (i.e. a 2MiB window), and
still sees the issue.

In addition, this is a single-machine test (no NIC), so I am
contemplating whether to (1) treat this as not a real-world workload
and ignore it, or (2) implement an asymmetric charge/uncharge strategy
for memcg.