Message-ID: <1577669436.25204.8.camel@mtkswgap22>
Date: Mon, 30 Dec 2019 09:30:36 +0800
From: Miles Chen <miles.chen@...iatek.com>
To: Qian Cai <cai@....pw>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <linux-mediatek@...ts.infradead.org>,
<wsd_upstream@...iatek.com>
Subject: Re: [PATCH] mm/page_owner: print largest memory consumer when OOM
panic occurs
On Fri, 2019-12-27 at 08:46 -0500, Qian Cai wrote:
>
> > On Dec 27, 2019, at 2:44 AM, Miles Chen <miles.chen@...iatek.com> wrote:
> >
> > It does not cover every situation.
> >
> > I've listed the different OOM panic situations in a previous email [1],
> > and what we can do about each of them with the current information.
> >
> > There are some cases which cannot easily be covered by the current
> > information.
> > For example: a memory leak caused by alloc_pages() or vmalloc() with
> > a large size.
> > I have kept seeing these issues for years, and that's why I built this
> > patch. It fills in a missing piece of the puzzle.
> >
> > To show that the approach is practical and useful, I have collected
> > test cases on real devices and posted the test results in the commit
> > message.
> > These are real cases, not hypothetical ones.
>
> Of course this may have helped debug *your* problems in the past, but if that is the only requirement for merging a debugging patch like this, we would end up with an endless stream of them. If your goal is to save developers from unnecessarily reproducing issues with page_owner enabled in order to debug them, then your patch does not help much with the majority of other developers' issues.
>
> page_owner is designed to give information about the top candidates that might be causing issues, so it would make some sense if it dumped, say, the top 10 greatest memory consumers, but that would also clutter the OOM report far too much, so it is a no-go.
Yes, printing the top 10 would be too much. That's why I print only the
single greatest consumer, and tested whether this approach works.
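
To make the idea concrete, here is a minimal user-space sketch of what
"print only the greatest consumer" means. This is not the patch itself:
the record layout, the names (struct alloc_record, report_largest) and
the sample numbers are invented for illustration; the real code works on
the per-call-site totals that page_owner already tracks.

	/*
	 * Not the actual kernel change -- just a toy to illustrate the
	 * policy of reporting a single winner.  struct alloc_record and
	 * its contents are made up for this example.
	 */
	#include <stdio.h>
	#include <stddef.h>

	struct alloc_record {
		unsigned long stack_handle;	/* identifies one allocation call site */
		unsigned long pages;		/* pages currently held by that site */
	};

	/* Scan the per-call-site totals and report only the largest consumer. */
	static void report_largest(const struct alloc_record *recs, size_t n)
	{
		unsigned long best_handle = 0, best_pages = 0;
		size_t i;

		for (i = 0; i < n; i++) {
			if (recs[i].pages > best_pages) {
				best_pages = recs[i].pages;
				best_handle = recs[i].stack_handle;
			}
		}

		if (best_pages)
			printf("largest consumer: handle %#lx holds %lu pages\n",
			       best_handle, best_pages);
	}

	int main(void)
	{
		/* Pretend these totals were collected at OOM-panic time. */
		const struct alloc_record recs[] = {
			{ 0x1a, 1200 },		/* e.g. a driver leaking alloc_pages() */
			{ 0x2b,  300 },
			{ 0x3c,   64 },
		};

		report_largest(recs, sizeof(recs) / sizeof(recs[0]));
		return 0;
	}

The patch itself presumably performs the equivalent scan over the
page_owner data at OOM-panic time and prints the winning call site's
stack trace rather than a raw handle.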
I will resend this patch after the break. Let's wait for others'
comments in the meantime.
Miles