Message-ID: <20230914212042.nnubjht3huiap3kk@google.com>
Date: Thu, 14 Sep 2023 21:20:42 +0000
From: Shakeel Butt <shakeelb@...gle.com>
To: Abel Wu <wuyun.abel@...edance.com>
Cc: "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
	Michal Hocko <mhocko@...e.com>, Johannes Weiner <hannes@...xchg.org>,
	Yosry Ahmed <yosryahmed@...gle.com>, "Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Yu Zhao <yuzhao@...gle.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
	Yafang Shao <laoar.shao@...il.com>, Kuniyuki Iwashima <kuniyu@...zon.com>,
	Martin KaFai Lau <martin.lau@...nel.org>, Breno Leitao <leitao@...ian.org>,
	Alexander Mikhalitsyn <alexander@...alicyn.com>, David Howells <dhowells@...hat.com>,
	Jason Xing <kernelxing@...cent.com>, open list <linux-kernel@...r.kernel.org>,
	"open list:NETWORKING [GENERAL]" <netdev@...r.kernel.org>,
	"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>
Subject: Re: [RFC PATCH net-next 0/3] sock: Be aware of memcg pressure on alloc

On Fri, Sep 01, 2023 at 02:21:25PM +0800, Abel Wu wrote:
>
[...]
> As expected, no obvious performance gain or loss observed. As for the
> issue we encountered, this patchset provides better worst-case behavior
> that such OOM cases are reduced at some extent. While further fine-
> grained traffic control is what the workloads need to think about.
>

I agree with the motivation, but I don't agree with the solution (patches 2
and 3). This adds one more heuristic to the code, which you yourself
described as helping only to some extent. It also adds more dependency on
the vmpressure subsystem, which is in a weird state: vmpressure is a cgroup
v1 feature that the networking subsystem somehow relies on for cgroup v2
deployments. In addition, vmpressure acts differently for workloads with
different memory types (mapped, mlocked, kernel memory).

Anyway, have you explored a BPF-based approach? You can induce socket
pressure at the points you care about and define memory pressure however
your use case requires. You could define memory pressure using PSI or
vmpressure, or maybe with MEMCG_HIGH events.

What do you think?

thanks,
Shakeel
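
[Editorial note: as a rough illustration of the PSI option mentioned above, the
sketch below shows the documented cgroup v2 PSI trigger interface from userspace:
arm a trigger on a cgroup's memory.pressure file and poll() for POLLPRI events.
It is not the in-kernel BPF mechanism being suggested in the thread, and the
cgroup path and threshold values ("some 150000 1000000", i.e. 150ms of stall per
1s window) are placeholder assumptions, not anything from this discussion.]

	/*
	 * Minimal userspace sketch: arm a PSI trigger on a cgroup v2
	 * memory.pressure file and wait for pressure events. Path and
	 * threshold are illustrative only.
	 */
	#include <stdio.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <poll.h>

	int main(void)
	{
		/* Hypothetical cgroup; requires cgroup v2 with PSI enabled. */
		const char *path = "/sys/fs/cgroup/mygroup/memory.pressure";
		/* Fire when "some" tasks stall >= 150ms within a 1s window. */
		const char *trig = "some 150000 1000000";
		struct pollfd pfd;

		int fd = open(path, O_RDWR | O_NONBLOCK);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Writing the trigger string (including the NUL) arms the trigger. */
		if (write(fd, trig, strlen(trig) + 1) < 0) {
			perror("write trigger");
			return 1;
		}

		pfd.fd = fd;
		pfd.events = POLLPRI;

		for (;;) {
			if (poll(&pfd, 1, -1) < 0) {
				perror("poll");
				return 1;
			}
			if (pfd.revents & POLLERR) {
				fprintf(stderr, "trigger file went away\n");
				return 1;
			}
			if (pfd.revents & POLLPRI) {
				/* Pressure crossed the threshold: a policy agent
				 * could react here (e.g. throttle socket users). */
				printf("memory pressure event\n");
			}
		}
		return 0;
	}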