Message-ID: <aKdu+o4JrxujCwt3@devbig569.cln6.facebook.com>
Date: Thu, 21 Aug 2025 12:09:46 -0700
From: Yueyang Pan <pyyjason@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Joshua Hahn <joshua.hahnjy@...il.com>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Usama Arif <usamaarif642@...il.com>, Michal Hocko <mhocko@...e.com>,
David Rientjes <rientjes@...gle.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [RFC 1/1] Add memory allocation info for cgroup oom
On Wed, Aug 20, 2025 at 06:25:56PM -0700, Suren Baghdasaryan wrote:
> On Mon, Aug 18, 2025 at 7:24 AM Yueyang Pan <pyyjason@...il.com> wrote:
> >
> > On Thu, Aug 14, 2025 at 01:11:08PM -0700, Joshua Hahn wrote:
> > > On Thu, 14 Aug 2025 10:11:57 -0700 Yueyang Pan <pyyjason@...il.com> wrote:
> > >
> > > > Enable show_mem for the cgroup oom case. We will have memory allocation
> > > > information for the whole machine in such a case.
>
> Memory allocations are only a part of show_mem(), so I would not call
> this change memory allocation profiling specific. The title and the
> changelog should be corrected to reflect exactly what is being done
> here - logging system-wide memory state in addition to cgroup memory
> state during a cgroup oom-kill.
Thanks for your feedback, Suren! I will make the title more precise in
the next version.
> As for whether it makes sense to report system memory during cgroup
> oom-kill... I'm not too sure. Maybe people who use memcgs more
> extensively than what I've seen (in Android) can chime in?
>
In my opinion, the show_free_areas() output and the memory allocation
profiling data can provide an entry point for understanding what happened
during a cgroup oom. We can also compare them against historical data to
see whether some memory usage has spiked.
Feel free to criticize me if this does not make sense.
>
> > >
> > > Hi Pan,
> > >
> > > Thank you for your patch! This makes sense to me. As for your concerns from the
> > > cover letter on whether this is too much information: personally I don't think
> > > so, but perhaps other developers will have different opinions?
> > >
> > > I just have a few comments / nits.
> >
> > Thanks for your comment, Joshua.
> >
> > >
> > > > Signed-off-by: Yueyang Pan <pyyjason@...il.com>
> > > > ---
> > > > mm/oom_kill.c | 4 +++-
> > > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > > > index 17650f0b516e..3ca224028396 100644
> > > > --- a/mm/oom_kill.c
> > > > +++ b/mm/oom_kill.c
> > > > @@ -465,8 +465,10 @@ static void dump_header(struct oom_control *oc)
> > > > pr_warn("COMPACTION is disabled!!!\n");
> > > >
> > > > dump_stack();
> > > > - if (is_memcg_oom(oc))
> > > > + if (is_memcg_oom(oc)) {
> > > > mem_cgroup_print_oom_meminfo(oc->memcg);
> > > > + show_mem();
> > >
> > > Below, there is a direct call to __show_mem, which applies node and zone
> > > filtering. I am wondering whether it would make sense to also call __show_mem
> > > with the same arguments here. show_mem() is just a wrapper around __show_mem
> > > with default parameters (i.e. not filtering out nodes, not filtering out
> > > zones).
> >
> > The reason I call show_mem() directly here is that a cgroup is not bound to
> > a specific zone or node (correct me if I am wrong). Thus I simply invoke
> > show_mem() to show system-wide memory info.
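To add a reference here: show_mem() is indeed just the unfiltered wrapper
Joshua mentioned. Roughly this, from include/linux/mm.h (the exact location
may vary between kernel versions):

	static inline void show_mem(void)
	{
		/* no filter flags, no nodemask, highest zone index:
		 * dump memory state for the whole system */
		__show_mem(0, NULL, MAX_NR_ZONES - 1);
	}

So with this patch the memcg path dumps system-wide state, while the
global-oom path below keeps its node/zone filters.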
> >
> > >
> > > If you think this makes sense, we can even take it out of the if-else statement
> > > and call it unconditionally. But this is just my opinion, please feel free to
> > > keep the unfiltered call if you believe that fits better in here.
> > >
> > > > + }
> > >
> > > NIT: Should this closing brace be on the same line as the following else
> > > statement, as per the kernel style guide [1]?
> >
> > Sorry for this. I will definitely run checkpatch on the formal patch.
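For v2, the whole block would then look something like this (untested
sketch; the else branch is from my reading of current mm/oom_kill.c, so
please double-check):

	if (is_memcg_oom(oc)) {
		/* memcg oom: print memcg state, then system-wide state */
		mem_cgroup_print_oom_meminfo(oc->memcg);
		show_mem();
	} else {
		__show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask,
			   gfp_zone(oc->gfp_mask));
		if (should_dump_unreclaim_slab())
			dump_unreclaimable_slab();
	}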
> >
> > >
> > > > else {
> > > > __show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask, gfp_zone(oc->gfp_mask));
> > > > if (should_dump_unreclaim_slab())
> > > > --
> > > > 2.47.3
> > >
> > > Thanks again Pan, I hope you have a great day!
> > > Joshua
> > >
> > > [1] https://docs.kernel.org/process/coding-style.html
> > >
> > > Sent using hkml (https://github.com/sjp38/hackermail)
> >
> > Sorry that I forgot to cc some maintainers, so I have added them in this
> > reply.
> > Pan
Thanks,
Pan