Message-ID: <c781641d-e5dc-f79b-a9c0-aa91ab228316@redhat.com>
Date: Mon, 31 Jan 2022 14:01:18 -0500
From: Waiman Long <longman@...hat.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Petr Mladek <pmladek@...e.com>,
Steven Rostedt <rostedt@...dmis.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, Ira Weiny <ira.weiny@...el.com>,
Rafael Aquini <aquini@...hat.com>
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information
On 1/31/22 04:38, Michal Hocko wrote:
> On Sat 29-01-22 15:53:15, Waiman Long wrote:
>> It was found that a number of offlined memcgs were not freed because
>> they were pinned by some charged pages that were present. Even "echo
>> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
>> offlined but not freed memcgs tend to increase in number over time with
>> the side effect that percpu memory consumption as shown in /proc/meminfo
>> also increases over time.
>>
>> In order to find out more about the pages that pin offlined memcgs,
>> the page_owner feature is extended to dump memory cgroup information,
>> in particular whether the cgroup is offline or not.
> It is not really clear to me how this is supposed to be used. Are you
> really dumping all the pages in the system to find the offline memcgs?
> That looks rather clumsy to me. I am not against adding memcg
> information to the page owner output. That can be useful in other
> contexts.
I am just piggybacking on top of the existing page_owner tool to provide
the information I need to find out which pages are pinning the dead
memcgs. page_owner is a debugging tool that is not enabled by default.
We do have to add a kernel parameter and reboot the system to use it,
but that is pretty easy to do once we have a reproducer for the problem.
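
For reference, the basic page_owner workflow (as described in
Documentation/vm/page_owner.rst) is roughly the sketch below; the
additional memcg line in the output comes from this patch, so its exact
format is whatever the patch prints:

  # Build the kernel with CONFIG_PAGE_OWNER=y and boot with the extra
  # command line parameter:
  #   page_owner=on
  #
  # After reproducing the problem, dump the per-page allocation records
  # from debugfs and aggregate them with the helper built from
  # tools/vm/page_owner_sort.c:
  cat /sys/kernel/debug/page_owner > page_owner_full.txt
  ./page_owner_sort page_owner_full.txt sorted_page_owner.txt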
Cheers,
Longman