Message-ID: <zerazodfo2uu5az4s6vuwsgnk7esgjptygh5kdgxnb74o2lzjm@fkziy4ggxrxc>
Date: Mon, 8 Sep 2025 13:34:17 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Michal Hocko <mhocko@...e.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
Yueyang Pan <pyyjason@...il.com>, Shakeel Butt <shakeel.butt@...ux.dev>,
Usama Arif <usamaarif642@...il.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sourav Panda <souravpanda@...gle.com>, Pasha Tatashin <tatashin@...gle.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [RFC 0/1] Try to add memory allocation info for cgroup oom kill

On Fri, Aug 29, 2025 at 08:35:08AM +0200, Michal Hocko wrote:
> On Tue 26-08-25 19:38:03, Suren Baghdasaryan wrote:
> > On Tue, Aug 26, 2025 at 7:06 AM Yueyang Pan <pyyjason@...il.com> wrote:
> > >
> > > On Thu, Aug 21, 2025 at 12:53:03PM -0700, Shakeel Butt wrote:
> > > > On Thu, Aug 21, 2025 at 12:18:00PM -0700, Yueyang Pan wrote:
> > > > > On Thu, Aug 21, 2025 at 11:35:19AM -0700, Shakeel Butt wrote:
> > > > > > On Thu, Aug 14, 2025 at 10:11:56AM -0700, Yueyang Pan wrote:
> > > > > > > Right now in oom_kill_process(), if the OOM is because of the cgroup
> > > > > > > limit, we won't get memory allocation information. In some cases we
> > > > > > > can have a large cgroup workload running which dominates the machine;
> > > > > > > the reason for using a cgroup is to leave some resources for the
> > > > > > > system. When this cgroup is OOM-killed, we would also like to have some
> > > > > > > memory allocation information for the whole server as well. That is the
> > > > > > > reason behind this mini change. Is it an acceptable thing to do? Will it
> > > > > > > be too much information for people? I am happy with any suggestions!
> > > > > >
> > > > > > For a single patch, it is better to have all the context in the patch
> > > > > > and there is no need for a cover letter.
> > > > >
> > > > > Thanks for your suggestion Shakeel! I will change this in the next version.
> > > > >
> > > > > >
> > > > > > What exact information do you want on the memcg OOM that will be helpful
> > > > > > for users in general? You mentioned memory allocation information;
> > > > > > can you please elaborate a bit more?
> > > > > >
> > > > >
> > > > > As in my reply to Suren, I was thinking the system-wide memory usage info
> > > > > provided by show_free_areas() and the memory allocation profiling info
> > > > > could help us debug cgroup OOMs by comparing them with historical data.
> > > > > What is your take on this?
> > > > >
> > > >
> > > > I am not really sure about show_free_areas(), more specifically how the
> > > > historical data diff would be useful for a memcg OOM. If you have a
> > > > concrete example, please give one. For memory allocation profiling, is
> > >
> > > Sorry for my late reply. I have been trying hard to think about a use case.
> > > One specific case I can think of is when there is no workload stacking,
> > > i.e. one job is running alone on the machine. For example, memory allocation
> > > profiling can show the memory usage of the network driver, which can make it
> > > harder for the cgroup to allocate memory and eventually lead to cgroup OOMs.
> > > Without this information, it would be hard to reason about what is happening
> > > in the kernel given an increased OOM count.
> > >
> > > show_free_areas() will give a summary of the different types of memory which
> > > could possibly lead to increased cgroup OOMs in my previous case. Then one
> > > can dig deeper, using memory allocation profiling as an entry point for
> > > debugging.
> > >
> > > Does this make sense to you?
> >
> > I think if we had per-memcg memory profiling that would make sense.
> > Counters would reflect only allocations made by the processes from
> > that memcg, and you could easily identify the allocation that caused
> > the memcg to OOM. But dumping system-wide profiling information at
> > memcg-OOM time I think would not help you with this task. It will be
> > polluted with allocations from other memcgs, so it likely won't help
> > much (unless there is some obvious leak, or you know that a specific
> > allocation is done only by a process from your memcg and no other
> > process).
>
> I agree with Suren. It makes very little sense, and in many cases it
> could be actively misleading, to print global memory state on memcg OOMs.
> Not to mention that those events, unlike global OOMs, could happen much
> more often.
>
> If you are interested in more information on memcg OOM occurrences, you
> can detect OOM events and print whatever information you need.
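
As an aside, with cgroup v2 that much is already doable from userspace
today. A minimal sketch, assuming CONFIG_MEM_ALLOC_PROFILING for
/proc/allocinfo; the cgroup path and output file below are placeholders:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical cgroup of interest - adjust to the real workload. */
#define EVENTS_PATH "/sys/fs/cgroup/workload/memory.events"

static long parse_oom_kill(const char *buf)
{
	const char *p = strstr(buf, "oom_kill ");

	return p ? atol(p + strlen("oom_kill ")) : -1;
}

int main(void)
{
	char buf[4096];
	int fd = open(EVENTS_PATH, O_RDONLY);
	ssize_t n;
	long last, now;

	if (fd < 0)
		return 1;

	/* Initial read arms the kernfs notification and gives a baseline. */
	n = pread(fd, buf, sizeof(buf) - 1, 0);
	if (n < 0)
		return 1;
	buf[n] = '\0';
	last = parse_oom_kill(buf);

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };

		/* kernfs flags a value change as POLLPRI/POLLERR */
		poll(&pfd, 1, -1);

		/* Re-reading resets the notification and gives the new counts. */
		n = pread(fd, buf, sizeof(buf) - 1, 0);
		if (n <= 0)
			continue;
		buf[n] = '\0';

		now = parse_oom_kill(buf);
		if (now > last) {
			last = now;
			/* Dump whatever system-wide state is interesting here. */
			system("cat /proc/allocinfo > /tmp/allocinfo.at-oom");
		}
	}
}

That only gets you the state after the fact, of course, which is part of
why printing it in the OOM report itself is attractive.
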
"Misleading" is a concern; the show_mem report would want to print very
explicitly which information is specifically for the memcg and which is
global, and we don't do that now.

I don't think that means we shouldn't print it at all, though, because it
can happen that we're in an OOM because one specific codepath is
allocating way more memory than it should be. Even if the memory
allocation profiling info isn't specific to the memcg, it'll be useful
information in a situation like that; it just needs to very clearly
state what it's reporting on.
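
Roughly the kind of labelling I mean - a hypothetical fragment, not what
mm/oom_kill.c does today, with the wording and placement purely for
illustration:

	if (is_memcg_oom(oc)) {
		/*
		 * Everything show_mem() prints is system-wide state; say so
		 * explicitly instead of letting it read as memcg-scoped.
		 */
		pr_warn("memcg OOM: the following memory report is system-wide, not limited to the OOMing cgroup:\n");
		show_mem();
	}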

I'm not sure we do that very well at all now; I'm looking at
__show_mem() and it's not even passed a memcg!?
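
For reference, the interface as of recent trees (check your own) is
roughly:

	void __show_mem(unsigned int filter, nodemask_t *nodemask,
			int max_zone_idx);

	static inline void show_mem(void)
	{
		__show_mem(0, NULL, MAX_NR_ZONES - 1);
	}

Nothing in there identifies a memcg, so whatever it prints is global.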

Also, if anyone's thinking "what if memory allocation profiling were
memcg aware": the thing we saw when doing performance testing is that
memcg accounting had much higher overhead than memory allocation
profiling - hence, most kernel memory allocations don't even get memcg
accounting.

I think that got the memcg people looking at ways to make the accounting
cheaper, but I'm not sure if anything landed from that.
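
For context on that last point: memcg accounting of kernel memory is
opt-in per allocation site or per slab cache. An illustrative fragment
(the struct and cache names are made up):

#include <linux/slab.h>

struct foo {
	int x;
};

static void accounting_example(void)
{
	struct kmem_cache *cache;
	void *buf;

	/* Charged to the current task's memcg only because of __GFP_ACCOUNT: */
	buf = kmalloc(4096, GFP_KERNEL_ACCOUNT);	/* GFP_KERNEL | __GFP_ACCOUNT */
	kfree(buf);

	/* Whole-cache opt-in: every object allocated from it gets charged. */
	cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
				  SLAB_ACCOUNT, NULL);
	if (cache)
		kmem_cache_destroy(cache);
}

Memory allocation profiling (CONFIG_MEM_ALLOC_PROFILING), by contrast,
tags every allocation call site and reports per-site counters in
/proc/allocinfo, so the two aren't interchangeable.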