Message-ID: <20190828070845.GC7386@dhcp22.suse.cz>
Date: Wed, 28 Aug 2019 09:08:45 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Edward Chron <echron@...sta.com>
Cc: Qian Cai <cai@....pw>, Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>,
Johannes Weiner <hannes@...xchg.org>,
David Rientjes <rientjes@...gle.com>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Shakeel Butt <shakeelb@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Ivan Delalande <colona@...sta.com>
Subject: Re: [PATCH 00/10] OOM Debug print selection and additional
information
On Tue 27-08-19 19:47:22, Edward Chron wrote:
> On Tue, Aug 27, 2019 at 6:32 PM Qian Cai <cai@....pw> wrote:
> >
> >
> >
> > > On Aug 27, 2019, at 9:13 PM, Edward Chron <echron@...sta.com> wrote:
> > >
> > > On Tue, Aug 27, 2019 at 5:50 PM Qian Cai <cai@....pw> wrote:
> > >>
> > >>
> > >>
> > >>> On Aug 27, 2019, at 8:23 PM, Edward Chron <echron@...sta.com> wrote:
> > >>>
> > >>>
> > >>>
> > >>> On Tue, Aug 27, 2019 at 5:40 AM Qian Cai <cai@....pw> wrote:
> > >>> On Mon, 2019-08-26 at 12:36 -0700, Edward Chron wrote:
> > >>>> This patch series adds a debug option, controlled through debugfs,
> > >>>> that provides additional controls to limit how much information is
> > >>>> printed when an OOM event occurs and/or to optionally print additional
> > >>>> information about slab usage, vmalloc allocations, user process memory
> > >>>> usage, the number of processes/tasks with some summary information
> > >>>> about those tasks (number runnable, I/O wait), system information
> > >>>> (#CPUs, kernel version and other useful system state), and ARP and
> > >>>> ND cache entries.
> > >>>>
> > >>>> Linux OOM can optionally provide a lot of information, what's missing?
> > >>>> ----------------------------------------------------------------------
> > >>>> Linux provides a variety of detailed information when an OOM event
> > >>>> occurs but has limited options to control how much output is produced.
> > >>>> The system-related information is printed unconditionally, and limited
> > >>>> per-process information is printed by a default-enabled option that
> > >>>> may be disabled.
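
For context, the default-enabled per-process output mentioned above appears
to correspond to the existing vm.oom_dump_tasks sysctl in stock kernels (an
assumption on my part, not something defined by this patch series); a minimal
illustration of toggling it:

  # Disable the per-process task dump in the OOM report (enabled by default).
  sysctl vm.oom_dump_tasks=0
  # Restore the default behaviour.
  sysctl vm.oom_dump_tasks=1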
> > >>>>
> > >>>> Slab usage information was recently added and is output only if slab
> > >>>> usage exceeds user memory usage.
> > >>>>
> > >>>> Many OOM events are due to user application memory usage, sometimes
> > >>>> in combination with kernel resource usage, exceeding what is expected.
> > >>>> Detailed information about how memory was being used when the event
> > >>>> occurred may be required to identify the root cause of the OOM event.
> > >>>>
> > >>>> However, some environments are very large and printing all of the
> > >>>> information about processes, slabs and/or vmalloc allocations may not
> > >>>> be feasible. For other environments, printing as much of this
> > >>>> information as possible may be needed to root-cause OOM events.
> > >>>>
> > >>>
> > >>> For more in-depth analysis of OOM events, people could use kdump to save
> > >>> a vmcore by setting "panic_on_oom", and then use the crash utility to
> > >>> analyze the vmcore, which contains pretty much all the information you
> > >>> need.
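
For reference, a minimal sketch of the setup described above, assuming kdump
is already configured on the system (the debug vmlinux and crash directory
paths below are only examples and vary by distribution):

  # Panic, and therefore trigger kdump, whenever the system-wide OOM killer fires.
  sysctl vm.panic_on_oom=1

  # After the dump is captured, analyze it offline with the crash utility.
  crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/<timestamp>/vmcore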
> > >>>
> > >>> Certainly, this is the ideal. A full system dump would give you the maximum amount of
> > >>> information.
> > >>>
> > >>> Unfortunately some environments may lack space to store the dump,
> > >>
> > >> Kdump usually also supports dumping to a remote target via NFS, SSH, etc.
> > >>
> > >>> let alone the time to dump the storage contents and restart the system. Some
> > >>
> > >> There is also “makedumpfile”, which can compress and filter out unwanted
> > >> memory to reduce the vmcore size and speed up the dumping process by
> > >> using multiple threads.
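
For reference, a typical invocation along those lines, run against
/proc/vmcore from the kdump environment (the dump level and thread count
here are only examples):

  # -l: LZO compression, -d 31: filter zero, cache, user and free pages,
  # --num-threads: compress with multiple threads (newer makedumpfile versions).
  makedumpfile -l -d 31 --num-threads 4 /proc/vmcore /var/crash/vmcore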
> > >>
> > >>> systems can take many minutes to fully boot up, to reset and reinitialize all the
> > >>> devices. So unfortunately this is not always an option, and we need an OOM Report.
> > >>
> > >> I am not sure how the system needing some minutes to reboot is relevant to
> > >> the discussion here. The idea is to save a vmcore that can be analyzed
> > >> offline, even on another system, as long as there is a matching “vmlinux”.
> > >>
> > >>
> > >
> > > If taking a dump on an OOM event doesn't reboot the system, and if it
> > > runs fast enough that it doesn't slow processing enough to appreciably
> > > affect the system's responsiveness, then it would be an ideal solution.
> > > For some it would be overkill, but since it is an option, it is a choice
> > > to consider or not.
> >
> > It sounds like you are looking for more of this,
>
> If you want to supplement the OOM Report and keep the information
> together, then you could use eBPF to do that. If that really is the
> preference, it might make sense to implement the entire report as an
> eBPF script; then you can modify the script however you choose. That
> would be very flexible. You can change your configuration on the fly.
> As long as it has access to everything you need, it should work.
>
> Michal would know what direction the OOM code is headed and whether he
> thinks that fits with where things are going.
It seems we have landed on similar thinking here. As mentioned in my
earlier email in this thread, I can see the extensibility being achieved
by eBPF. Essentially we would have a base form of the OOM report, like
now, and scripts would then hook in there to provide whatever a specific
use case needs. My practical experience with eBPF is close to zero, so I
have no idea how that would actually work out, though.
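
As a rough illustration of the kind of hook point such scripts could use, a
bpftrace one-liner attached to the OOM kill path (treat this as a sketch
only; the probed function and available context differ across kernel
versions):

  # Log the task that triggered the OOM killer each time a victim is selected.
  bpftrace -e 'kprobe:oom_kill_process { printf("OOM kill triggered by %s (pid %d)\n", comm, pid); }'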
[...]
> For production systems, installing and updating eBPF scripts may someday
> be very common, but I wonder how data center managers feel about it now.
> Developers are very excited about it, and it is a very powerful tool, but
> can I get permission to add or replace an eBPF script on production
> systems?
I am not sure I understand. There must be somebody trusted to take care
of systems, right?
--
Michal Hocko
SUSE Labs