Message-ID: <ZWhEawxI1CT8stu9@tiehlicka>
Date:   Thu, 30 Nov 2023 09:14:35 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Kent Overstreet <kent.overstreet@...ux.dev>
Cc:     Roman Gushchin <roman.gushchin@...ux.dev>,
        Qi Zheng <zhengqi.arch@...edance.com>,
        Muchun Song <muchun.song@...ux.dev>,
        Linux-MM <linux-mm@...ck.org>, linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH 2/7] mm: shrinker: Add a .to_text() method for shrinkers

On Wed 29-11-23 18:11:47, Kent Overstreet wrote:
> On Wed, Nov 29, 2023 at 10:14:54AM +0100, Michal Hocko wrote:
> > On Tue 28-11-23 16:34:35, Roman Gushchin wrote:
> > > On Tue, Nov 28, 2023 at 02:23:36PM +0800, Qi Zheng wrote:
> > [...]
> > > > Now I think adding this method might not be a good idea. If we allow
> > > > shrinkers to report their own private information, OOM logs may become
> > > > cluttered. Most people only care about some general information when
> > > > troubleshooting an OOM problem, not about the private information of a
> > > > shrinker.
> > > 
> > > I agree with that.
> > > 
> > > It seems that the feature is mostly useful for kernel developers and it's easily
> > > achievable by attaching a bpf program to the oom handler. If it requires a bit
> > > of work on the bpf side, we can do that instead, but probably not. And this
> > > solution can potentially provide way more information in a more flexible way.
> > > 
> > > So I'm not convinced it's a good idea to make the generic oom handling code
> > > more complicated and fragile for everybody, as well as making oom reports differ
> > > more between kernel versions and configurations.
> > 
> > Completely agreed! From my many years of experience analysing OOM
> > reports from production systems I would conclude the following categories:
> > 	- clear runaways (and/or memory leaks)
> > 		- userspace consumers - either shmem or anonymous memory
> > 		  predominantly consumes the memory, and swap is either
> > 		  depleted or not configured.
> > 		  The OOM report is usually useful to pinpoint those, as
> > 		  we have the required counters available.
> > 		- kernel memory consumers - if we are lucky they are
> > 		  using the slab allocator and unreclaimable slab is a
> > 		  huge part of the memory consumption. If it is a page
> > 		  allocator user, the OOM report only helps to deduce
> > 		  that fact by looking at how much user + slab + page
> > 		  tables etc. add up to. But identifying the root cause
> > 		  is close to impossible without something like
> > 		  page_owner or a crash dump.
> > 	- misbehaving memory reclaim
> > 		- a minority of issues, and the OOM report is usually
> > 		  insufficient to drill down to the root cause. If the
> > 		  problem is reproducible then collecting vmstat data
> > 		  can give a much better clue.
> > 		- a high number of reclaimable slab objects or leftover
> > 		  free swap are good indicators. Shrinker data could
> > 		  potentially be helpful in the slab case, but I have a
> > 		  really hard time remembering any such situation.
> > On non-production systems the situation is quite different. I can see
> > how it could be very beneficial to add very specific debugging data
> > for a subsystem/shrinker which is under development and could cause
> > the OOM. For that purpose the proposed scheme is rather inflexible
> > AFAICS.
> 
> Considering that you're an MM guy, and that shrinkers are pretty much
> universally used by _filesystem_ people - I'm not sure your experience
> is the most relevant here?

I really do not understand how you have reached that conclusion. In
those years of analysis I was not debugging my _own_ code. I was dealing
with customer reports, and I would not really accuse the customers of
specifically triggering any particular class of OOM reports.
 
> The general attitude I've been seeing in this thread has been one of
> dismissiveness towards filesystem people. Roman too; back when he was
> working on his shrinker debug feature I reached out to him, explained
> that I was working on my own, and asked about collaborating - got
> crickets in response...

This is completely off, and it makes me _really_ wonder whether
discussions like this one are worth the time. You have been presented
with arguments, yet you seem to be convinced that every disagreement is
directed against you. This is not the first time this has happened.
Please stop it!

As a matter of fact, you are proposing a very specific form of
debugging without showing that it is a generally useful thing to do, or
even giving us a couple of examples where it was useful in a production
environment. That is where you should have started, and then we could
have helped to form an acceptable solution. Throwing around a "this does
what we need, take it or leave it" attitude is usually not the best way
to get your work merged.
 
> Hmm..
> 
> Besides that, I haven't seen anything whatsoever out of you guys to
> make our lives easier regarding OOM debugging, nor do you guys even
> seem interested in the needs and perspectives of the filesystem people.
> Roman, your feature didn't help one bit for OOM debugging - it didn't
> even come with documentation or hints as to what it's for.
> 
> BPF? Please.
> 
> Anyways.
> 
> Regarding log spam: that's something this patchset already starts to
> address. I don't think we need to be dumping every single slab in the
> system for ~2 pages' worth of logs; hence this patchset changes that to
> print just the top 10.

Increasing the threshold for slabs to be printed is something I wouldn't
mind at all.
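
Just to illustrate what a bounded report could look like: the "top N
caches by size" selection itself is trivial. A toy userspace sketch
(struct cache_stat and the sample numbers are invented here for
illustration; this is not the patchset's actual code):

/* Toy sketch: report only the N largest caches. */
#include <stdio.h>
#include <stdlib.h>

struct cache_stat {
	const char *name;
	unsigned long nr_bytes;
};

/* Sort by size, largest first. */
static int cmp_desc(const void *a, const void *b)
{
	const struct cache_stat *ca = a, *cb = b;

	if (ca->nr_bytes == cb->nr_bytes)
		return 0;
	return ca->nr_bytes < cb->nr_bytes ? 1 : -1;
}

static void report_top_caches(struct cache_stat *stats, size_t nr, size_t top)
{
	qsort(stats, nr, sizeof(*stats), cmp_desc);
	for (size_t i = 0; i < nr && i < top; i++)
		printf("%-24s %10lu bytes\n", stats[i].name, stats[i].nr_bytes);
}

int main(void)
{
	struct cache_stat stats[] = {
		{ "dentry",      123UL << 20 },
		{ "inode_cache",  64UL << 20 },
		{ "kmalloc-256",   8UL << 20 },
	};

	report_top_caches(stats, sizeof(stats) / sizeof(stats[0]), 10);
	return 0;
}

The interesting question is where to put the cut-off, not how to do the
selection.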
 
> The same approach is taken with shrinkers: more targeted, less spammy
> output.
> 
> So now that that concern has been addressed, perhaps some actual meat:
> 
> For one, the patchset adds tracking for when a shrinker was last asked
> to free something, vs. when it was actually freed. So right there, we
> can finally see at a glance when a shrinker has gotten stuck and which
> one.

The primary problem I have with this is how to decide whether to dump
shrinker data and/or which shrinkers to mention. How do you know that it
is this specific shrinker which has contributed to the OOM state?
Printing that data unconditionally will very likely just be additional
ballast in most production situations. Sure, if you are doing filesystem
development and tuning your specific shrinker, then this might be really
important information to have. But then it is a development debugging
tool rather than something we want or need in a generic OOM report.
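
To be clear, I do not doubt that the tracking itself is cheap to
implement. A hypothetical sketch of the "last asked to scan vs. last
actually freed" bookkeeping (struct and function names are invented
here, not taken from the patchset):

#include <stdbool.h>

/* Hypothetical per-shrinker bookkeeping. */
struct shrinker_debug_info {
	unsigned long long last_scan_requested_ns;
	unsigned long long last_freed_ns;
};

/* Arbitrary threshold: no progress for 10 seconds counts as "stuck". */
#define SHRINKER_STUCK_NS	(10ULL * 1000 * 1000 * 1000)

static bool shrinker_looks_stuck(const struct shrinker_debug_info *info,
				 unsigned long long now_ns)
{
	/* Never asked to scan yet: nothing to report. */
	if (!info->last_scan_requested_ns)
		return false;
	/* Freed something since the last scan request: not stuck. */
	if (info->last_freed_ns >= info->last_scan_requested_ns)
		return false;
	/* Asked to free a long time ago and nothing has come back since. */
	return now_ns - info->last_scan_requested_ns > SHRINKER_STUCK_NS;
}

My question is not whether this can be implemented, but when and for
which shrinkers it is worth printing in a generic OOM report.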

All that being said, I am with you on the fact that the OOM report in
its current form could see improvements. But when adding more
information, please always focus on general usefulness. We have very
rich tracing capabilities which can be used for ad-hoc or very specific
purposes, as they are much more flexible.
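
To give one concrete example of what I mean by tracing being more
flexible - and this is also roughly the direction Roman's "attach a bpf
program to the oom handler" suggestion points at - the existing
vmscan:mm_shrink_slab_start and oom:mark_victim tracepoints can be
hooked from BPF today. A rough, untested libbpf CO-RE sketch (it assumes
a bpftool-generated vmlinux.h; the userspace loader and error handling
are omitted):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Last time each shrinker was asked to scan, keyed by shrinker address. */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096);
	__type(key, u64);
	__type(value, u64);
} last_invoked SEC(".maps");

SEC("tp_btf/mm_shrink_slab_start")
int BPF_PROG(on_shrink_start, struct shrinker *shr)
{
	u64 key = (u64)shr;
	u64 now = bpf_ktime_get_ns();

	bpf_map_update_elem(&last_invoked, &key, &now, BPF_ANY);
	return 0;
}

SEC("tracepoint/oom/mark_victim")
int on_mark_victim(void *ctx)
{
	/* Tell userspace (reading trace_pipe) to dump the map now. */
	bpf_printk("oom: dump shrinker map\n");
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

None of this has to live in the kernel's OOM path, and it can be
extended to record whatever a particular shrinker author cares about.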

-- 
Michal Hocko
SUSE Labs
