Message-ID: <YmEhXG8C7msGvhqL@dhcp22.suse.cz>
Date: Thu, 21 Apr 2022 11:18:20 +0200
From: Michal Hocko <mhocko@...e.com>
To: Kent Overstreet <kent.overstreet@...il.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, roman.gushchin@...ux.dev,
hannes@...xchg.org
Subject: Re: [PATCH 3/4] mm: Centralize & improve oom reporting in show_mem.c
On Wed 20-04-22 12:58:05, Kent Overstreet wrote:
> On Wed, Apr 20, 2022 at 08:58:36AM +0200, Michal Hocko wrote:
> > On Tue 19-04-22 16:32:01, Kent Overstreet wrote:
> > > This patch:
> > > - Moves lib/show_mem.c to mm/show_mem.c
> >
> > Sure, why not. Should be a separate patch.
> >
> > > - Changes show_mem() to always report on slab usage
> > > - Instead of reporting on all slabs, we only report on top 10 slabs,
> > > and in sorted order
> > > - Also reports on shrinkers, with the new shrinkers_to_text().
> >
> > Why do we need/want this? It would also be great to provide an
> > example of why the new output is better (in which cases) than the
> > existing one.
>
> Did you read the cover letter to the patch series?
Nope, only this one made it into my inbox based on my filters. I usually
try to fish out other parts of the thread, but I didn't this time.
Besides, it is always better to have a full patch description explaining
not only what has been changed but also why.
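
[For readers following along: the "top 10 slabs, in sorted order" item
quoted above boils down to sorting the slab caches by footprint and
printing only the first N. A minimal user-space sketch of that selection
logic follows; the struct and helper names are made up for illustration
and this is not the actual kernel patch.]

    #include <stdio.h>
    #include <stdlib.h>

    struct slab_info {
            const char *name;
            size_t total_bytes;     /* memory charged to the cache */
            size_t active_bytes;    /* memory in live objects */
    };

    /* Sort descending by total footprint. */
    static int cmp_by_total(const void *a, const void *b)
    {
            const struct slab_info *x = a, *y = b;

            if (x->total_bytes == y->total_bytes)
                    return 0;
            return x->total_bytes < y->total_bytes ? 1 : -1;
    }

    static void report_top_slabs(struct slab_info *s, size_t n, size_t top)
    {
            qsort(s, n, sizeof(*s), cmp_by_total);
            if (top > n)
                    top = n;
            for (size_t i = 0; i < top; i++)
                    printf("%-20s total: %zu KiB active: %zu KiB\n",
                           s[i].name, s[i].total_bytes >> 10,
                           s[i].active_bytes >> 10);
    }

    int main(void)
    {
            struct slab_info slabs[] = {
                    { "kmalloc-64",  2129920, 2119680 },
                    { "task_struct", 2044723, 2044723 },
                    { "kmalloc-4k",  1572864, 1572864 },
            };

            report_top_slabs(slabs, sizeof(slabs) / sizeof(slabs[0]), 10);
            return 0;
    }

[Printing only the N largest caches keeps an OOM report bounded in size
regardless of how many caches the system has registered.]
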
> But sure, I can give you an example of the new output:
Calling out the changes would be really helpful, but I guess the crux
is here.
> 00177 16644 pages reserved
> 00177 Unreclaimable slab info:
> 00177 9p-fcall-cache total: 8.25 MiB active: 8.25 MiB
> 00177 kernfs_node_cache total: 2.15 MiB active: 2.15 MiB
> 00177 kmalloc-64 total: 2.08 MiB active: 2.07 MiB
> 00177 task_struct total: 1.95 MiB active: 1.95 MiB
> 00177 kmalloc-4k total: 1.50 MiB active: 1.50 MiB
> 00177 signal_cache total: 1.34 MiB active: 1.34 MiB
> 00177 kmalloc-2k total: 1.16 MiB active: 1.16 MiB
> 00177 bch_inode_info total: 1.02 MiB active: 922 KiB
> 00177 perf_event total: 1.02 MiB active: 1.02 MiB
> 00177 biovec-max total: 992 KiB active: 960 KiB
> 00177 Shrinkers:
> 00177 super_cache_scan: objects: 127
> 00177 super_cache_scan: objects: 106
> 00177 jbd2_journal_shrink_scan: objects: 32
> 00177 ext4_es_scan: objects: 32
> 00177 bch2_btree_cache_scan: objects: 8
> 00177 nr nodes: 24
> 00177 nr dirty: 0
> 00177 cannibalize lock: 0000000000000000
> 00177
> 00177 super_cache_scan: objects: 8
> 00177 super_cache_scan: objects: 1
How does this help to analyze this allocation failure?
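
[The shrinker section of the dump above is conceptually a walk over the
registered shrinkers, asking each how many objects it could free. A toy
user-space model of that loop is below; the struct layout and the count
callback are invented for illustration, and the real shrinkers_to_text()
reads live kernel state instead.]

    #include <stdio.h>

    struct shrinker {
            const char *name;
            unsigned long (*count_objects)(void);
    };

    static unsigned long dummy_count(void)
    {
            return 127;     /* stand-in for a real object count */
    }

    static void shrinkers_report(const struct shrinker *s, size_t n)
    {
            for (size_t i = 0; i < n; i++)
                    printf("%s: objects: %lu\n",
                           s[i].name, s[i].count_objects());
    }

    int main(void)
    {
            const struct shrinker shrinkers[] = {
                    { "super_cache_scan", dummy_count },
            };

            shrinkers_report(shrinkers, 1);
            return 0;
    }
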
--
Michal Hocko
SUSE Labs