Message-ID: <20220423004607.q4lbz2mplkhlbyhm@moria.home.lan>
Date: Fri, 22 Apr 2022 20:46:07 -0400
From: Kent Overstreet <kent.overstreet@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, hch@....de,
hannes@...xchg.org, akpm@...ux-foundation.org,
linux-clk@...r.kernel.org, linux-tegra@...r.kernel.org,
linux-input@...r.kernel.org, rostedt@...dmis.org
Subject: Re: [PATCH v2 8/8] mm: Centralize & improve oom reporting in
show_mem.c
On Fri, Apr 22, 2022 at 05:27:41PM -0700, Roman Gushchin wrote:
> You're scanning over a small portion of all shrinker lists (on a machine with
> cgroups), so the top-10 list has little value.
> Global ->count_objects() returns the number of objects at the system/root_mem_cgroup
> level, not the shrinker's total.
Not quite following what you're saying here...?
If you're complaining that my current top-10-shrinker report isn't memcg aware,
that's valid - I can fix that.
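For concreteness, a memcg-aware count would look roughly like this - summing a
shrinker's objects across cgroups and nodes with the existing
count_objects()/shrink_control interface. The helper name
(shrinker_count_all()) is just for illustration, it's not in the patch, and
this is untested:

  #include <linux/memcontrol.h>
  #include <linux/nodemask.h>
  #include <linux/shrinker.h>

  static unsigned long shrinker_count_all(struct shrinker *shrinker)
  {
	struct mem_cgroup *memcg;
	unsigned long total = 0;
	int nid;

	memcg = mem_cgroup_iter(NULL, NULL, NULL);
	do {
		for_each_node(nid) {
			struct shrink_control sc = {
				.gfp_mask	= GFP_KERNEL,
				.nid		= nid,
				.memcg		= memcg,
			};
			unsigned long count = shrinker->count_objects(shrinker, &sc);

			/* memcg aware shrinkers can return SHRINK_EMPTY */
			if (count != SHRINK_EMPTY && count != SHRINK_STOP)
				total += count;

			/* non-NUMA-aware shrinkers ignore sc.nid */
			if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
				break;
		}

		/* non-memcg-aware shrinkers only have a global count */
		if (!(shrinker->flags & SHRINKER_MEMCG_AWARE)) {
			mem_cgroup_iter_break(NULL, memcg);
			break;
		}
	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));

	return total;
  }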
> > In my experience, it's rare to be _so_ out of memory that small kmalloc
> > allocations are failing - we'll be triggering the show_mem() report before that
> > happens.
>
> I agree. However, the OOM killer _has_ to make progress even in such rare
> circumstances.
Oh, and the concern is allocator recursion? Yeah, that's a good point.
Do you know if using memalloc_noreclaim_(save|restore) is sufficient for that,
or do we want GFP_ATOMIC? I'm already using GFP_ATOMIC for allocations when we
generate the report on slabs, since we're taking the slab mutex there.
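i.e. the two variants I'm weighing look roughly like this (buffer/function
names below are made up for illustration, not from the patch):

  #include <linux/gfp.h>
  #include <linux/sched/mm.h>
  #include <linux/slab.h>

  /* Option 1: mark the whole report path PF_MEMALLOC, keep GFP_KERNEL: */
  static char *oom_report_buf_noreclaim(size_t len)
  {
	unsigned int flags;
	char *buf;

	flags = memalloc_noreclaim_save();
	/* PF_MEMALLOC set: nested allocations skip direct reclaim, may use reserves */
	buf = kmalloc(len, GFP_KERNEL);
	memalloc_noreclaim_restore(flags);

	return buf;
  }

  /* Option 2: tag the individual allocations GFP_ATOMIC: */
  static char *oom_report_buf_atomic(size_t len)
  {
	/* no reclaim, no sleeping, allowed to dip into atomic reserves */
	return kmalloc(len, GFP_ATOMIC);
  }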