Date: Fri, 17 May 2024 19:13:58 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Yafang Shao <laoar.shao@...il.com>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>,
	Yosry Ahmed <yosryahmed@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Muchun Song <muchun.song@...ux.dev>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...nel.org>,
	Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, gthelen@...gle.com,
	rientjes@...gle.com
Subject: Re: [PATCH rfc 0/9] mm: memcg: separate legacy cgroup v1 code and
 put under config option

On Fri, May 17, 2024 at 10:21:01AM +0800, Yafang Shao wrote:
> On Fri, May 17, 2024 at 1:29 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
> >
> > On Thu, May 16, 2024 at 11:35:57AM +0800, Yafang Shao wrote:
> > > On Thu, May 9, 2024 at 2:33 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > > >
> > > > On Wed, May 08, 2024 at 08:41:29PM -0700, Roman Gushchin wrote:
> > > > > Cgroups v2 have been around for a while and many users have fully adopted them,
> > > > > so they never use cgroups v1 features and functionality. Yet they have to "pay"
> > > > > for the cgroup v1 support anyway:
> > > > > 1) the kernel binary contains useless cgroup v1 code,
> > > > > 2) some common structures like task_struct and mem_cgroup carry cgroup
> > > > >    v1-specific members which are never used,
> > > > > 3) some code paths have additional checks which are not needed.
> > > > >
> > > > > Cgroup v1's memory controller has a number of features that are not supported
> > > > > by cgroup v2 and their implementation is pretty much self-contained.
> > > > > Most notably, these features are: soft limit reclaim, oom handling in userspace,
> > > > > complicated event notification system, charge migration.
> > > > >
> > > > > Cgroup v1-specific code in memcontrol.c is close to 4k lines in size and it's
> > > > > intertwined with generic and cgroup v2-specific code. It's a burden on
> > > > > developers and maintainers.
> > > > >
> > > > > This patchset aims to solve these problems by:
> > > > > 1) moving cgroup v1-specific memcg code to the new mm/memcontrol-v1.c file,
> > > > > 2) putting definitions shared by memcontrol.c and memcontrol-v1.c into the
> > > > >    mm/internal.h header,
> > > > > 3) introducing the CONFIG_MEMCG_V1 config option, turned on by default,
> > > > > 4) making memcontrol-v1.c compile only if CONFIG_MEMCG_V1 is set,
> > > > > 5) putting unused struct mem_cgroup and task_struct members under
> > > > >    CONFIG_MEMCG_V1 as well.
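> > > > >
> > > > > To make 3) and 4) a bit more concrete, a rough sketch of what they could
> > > > > look like (illustrative only, exact naming and help text are not final):
> > > > >
> > > > >   # mm/Kconfig
> > > > >   config MEMCG_V1
> > > > >           bool "Legacy cgroup v1 memory controller"
> > > > >           depends on MEMCG
> > > > >           default y
> > > > >           help
> > > > >             Legacy cgroup v1 interface of the memory controller.
> > > > >             Say N if all workloads on the system use cgroup v2.
> > > > >
> > > > >   # mm/Makefile
> > > > >   obj-$(CONFIG_MEMCG_V1) += memcontrol-v1.o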
> > > > >
> > > > > This is an RFC version, which is not 100% polished yet, but it would be great
> > > > > to discuss and agree on the overall approach.
> > > > >
> > > > > Some open questions, opinions are appreciated:
> > > > > 1) I'm considering renaming non-static functions in memcontrol-v1.c to have
> > > > >    a mem_cgroup_v1_ prefix. Is this a good idea?
> > > > > 2) Do we want to extend it beyond the memory controller?
> > > > > 3) Is it better to use a new include/linux/memcontrol-v1.h instead of
> > > > >    mm/internal.h? Or mm/memcontrol-v1.h.
> > > > >
> > > >
> > > > Hi Roman,
> > > >
> > > > A very timely and important topic and we should definitely talk about it
> > > > during LSFMM as well. I have been thinking about this problem for quite
> > > > some time and I am getting more and more convinced that we should aim to
> > > > completely deprecate memcg-v1.
> > > >
> > > > More specifically:
> > > >
> > > > 1. What are the memcg-v1 features which have no alternative in memcg-v2
> > > > and are blockers for memcg-v1 users? (setting aside the cgroup v2
> > > > structural restrictions)
> > > >
> > > > 2. What are unused memcg-v1 features which we should start deprecating?
> > > >
> > > > IMO we should systematically start deprecating memcg-v1 features and
> > > > start unblocking the users stuck on memcg-v1.
> > > >
> > > > Now regarding the proposal in this series, I think it can be a first
> > > > step but should not give an impression that we are done. The only
> > > > concern I have is the potential for an "out of sight, out of mind" situation
> > > > with this change, but if we keep up the momentum of deprecating memcg-v1
> > > > it should be fine.
> > > >
> > > > I have CCed Greg and David from Google to get their opinion on what
> > > > memcg-v1 features are blockers for their memcg-v2 migration and if they
> > > > have concerns about the deprecation of memcg-v1 features.
> > > >
> > > > Anyone else still on memcg-v1, please do provide your input.
> > >
> > > Hi Shakeel,
> > >
> > > Hopefully I'm not too late.  We are currently using memcg v1.
> > >
> > > One specific feature we rely on in v1 is skmem accounting. In v1, we
> > > account for TCP memory usage without charging it to the memcg, which is
> > > useful for monitoring the TCP memory usage generated by tasks running
> > > in a container. However, in memcg v2, monitoring TCP memory requires
> > > charging it to the container, which can easily cause OOM issues. It
> > > would be better if we could monitor skmem usage without charging it in
> > > memcg v2, allowing us to account for it without the risk of
> > > triggering OOM conditions.
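> > >
> > > For example, today we can read something like
> > >
> > >   /sys/fs/cgroup/memory/<container>/memory.kmem.tcp.usage_in_bytes
> > >
> > > to see a container's TCP memory footprint without that usage being
> > > charged against the container's memory limit.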
> >
> > Hi Yafang,
> >
> > the data itself is available on cgroup v2 in memory.stat:sock, however
> > you're right, it's charged on par with other types of memory. It was
> > one of the main principles of cgroup v2's memory controller, so I don't
> > think it can be changed.
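> >
> > For example, something like
> >
> >   grep '^sock ' /sys/fs/cgroup/<your cgroup>/memory.stat
> >
> > reports the cgroup's current socket memory footprint in bytes.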
> >
> > So the feature you need is not skmem accounting, but quite the
> > opposite :)
> >
> > The question I have is: what makes socket memory different here?
> >
> > Is it something specific to your setup (e.g. you mostly use memory.max
> > to protect against memory leaks in the userspace code, but socket memory
> > spikes are always caused by external traffic and are legit), or do we have
> > more fundamental problems with socket memory handling, e.g. we can't
> > effectively reclaim it under memory pressure?
> 
> It is the first case.
> 
> >
> > In the first case you can maintain a ~2-line non-upstream patch which will
> > disable the charging while maintaining the statistics - it's not perfect, but
> > likely the best option here. In the second case we need to collectively fix it
> > for cgroup v2.
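> >
> > Just to illustrate the idea (not a real patch, untested, function body
> > heavily abbreviated): in mem_cgroup_charge_skmem() you would keep the
> > MEMCG_SOCK statistics update but skip the actual charge, roughly
> >
> >   bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg,
> >                                unsigned int nr_pages, gfp_t gfp_mask)
> >   {
> >           ...
> >           /* keep the per-memcg "sock" counter for monitoring */
> >           mod_memcg_state(memcg, MEMCG_SOCK, nr_pages);
> >           /* report success without actually charging the pages */
> >           return true;
> >   }
> >
> > plus the matching tweak on the uncharge side so the counter stays balanced.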
> >
> 
> Thank you for your advice. Currently, we do not have any immediate
> plans to migrate to cgroup v2. If we are required to use cgroup v2 in
> the future, we will need to maintain non-upstream patches.
> 
> By the way, is there any reason we cannot keep this behavior
> consistent with memcg v1 in the upstream kernel? That would save us
> from having to maintain it locally.

The idea of handling various types of memory independently hasn't worked well
for most users: it makes the configuration trickier and more fragile.
It's also more expensive in terms of accounting overhead.

Btw, the tcpmem accounting is quite expensive by itself, so by switching to
cgroup v2 you might see (depending on your traffic and cpu load) some
nice performance benefits.

Thanks!
