Message-ID: <ZsA8b9806Xl8AxLZ@host2.jankratochvil.net>
Date: Sat, 17 Aug 2024 14:00:15 +0800
From: Jan Kratochvil <jkratochvil@...l.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Michal Koutný <mkoutny@...e.com>,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Jonathan Corbet <corbet@....net>, Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH v5 0/3] Add memory.max.effective for application's
allocators

On Fri, 07 Jun 2024 02:15:00 +0800, Roman Gushchin wrote:
> If the goal is to detect how much memory it would be possible to allocate,
> I'm not sure that knowing all the memory.max limits higher up in the
> hierarchy really buys anything without knowing the actual usages and the
> potential for memory reclaim across the entire tree.
>
> E.g.:
>
> A (max = 100G)
> | \
> B C
>
> C's effective max will come out as 100G, but if B.anon_usage = 100G and
> there is no swap, the actual number is 0.

Yes, it would be better to also subtract the memory already used by the
ancestor (and thus even the current) cgroups. The original use case of this
feature is cloud nodes running a single JVM, where sibling cgroups are not
an issue.
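
For illustration only (this is not part of the posted series): a minimal
userspace sketch of such an estimate, walking from the cgroup of interest up
to the cgroup v2 root and taking the minimum of (memory.max - memory.current)
at every level. The /sys/fs/cgroup mount point, the file layout and the
helper names below are assumptions, not the interface added by the patches.

/* estimate.c: print a rough upper bound on how much more memory a cgroup
 * could allocate, as min(memory.max - memory.current) over the cgroup and
 * all of its ancestors (an ancestor's usage already includes descendants). */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read a memory.* file; "max" or a missing file means "no limit here". */
static unsigned long long read_mem_file(const char *dir, const char *name)
{
	char path[PATH_MAX], buf[64];
	unsigned long long val = ULLONG_MAX;
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	f = fopen(path, "r");
	if (!f)
		return ULLONG_MAX;	/* e.g. the root cgroup has no memory.max */
	if (fgets(buf, sizeof(buf), f) && strncmp(buf, "max", 3) != 0)
		val = strtoull(buf, NULL, 10);
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	char dir[PATH_MAX];
	unsigned long long avail = ULLONG_MAX;

	if (argc != 2)
		return 1;
	snprintf(dir, sizeof(dir), "%s", argv[1]);

	/* Walk from the given cgroup directory up to the cgroup v2 mount point. */
	while (strcmp(dir, "/sys/fs/cgroup") != 0) {
		unsigned long long max = read_mem_file(dir, "memory.max");
		unsigned long long cur = read_mem_file(dir, "memory.current");
		char *slash;

		if (max != ULLONG_MAX && cur != ULLONG_MAX) {
			unsigned long long headroom = max > cur ? max - cur : 0;

			if (headroom < avail)
				avail = headroom;
		}
		slash = strrchr(dir, '/');
		if (!slash)
			break;
		*slash = '\0';
	}
	printf("estimated allocatable: %llu bytes\n", avail);
	return 0;
}

E.g. ./estimate /sys/fs/cgroup/a/c would report 0 rather than 100G for the
A/B/C example above, because A's memory.current already contains B's 100G of
anonymous memory.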
Jan Kratochvil