Message-ID: <CAHbLzkrgtbR1o3pTSh_hqPhrkugXBnB4uwdHh+uK6Ndp-u_fEw@mail.gmail.com>
Date: Fri, 26 Feb 2021 11:19:51 -0800
From: Yang Shi <shy828301@...il.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] doc: memcontrol: add description for oom_kill
On Fri, Feb 26, 2021 at 8:42 AM Yang Shi <shy828301@...il.com> wrote:
>
> On Thu, Feb 25, 2021 at 11:30 PM Michal Hocko <mhocko@...e.com> wrote:
> >
> > On Thu 25-02-21 18:12:54, Yang Shi wrote:
> > > When debugging an OOM issue, I found the oom_kill counter of memcg
> > > confusing. At first glance, without checking the documentation, I
> > > thought it counted only memcg OOMs, but it turns out it counts both
> > > global and memcg OOMs.
> >
> > Yes, this is indeed the case. The point of the counter was to count oom
> > victims from the memcg rather than matching them to the source of the
> > oom. Remember that this could have been a memcg oom up in the
> > hierarchy as well. Counting victims on the oom origin could be equally
>
> Yes, it is updated hierarchically on v2, but not on v1. I suppose
> this is because v1 may work in non-hierarchical mode? If this is the
> only reason, we may be able to remove this to get aligned with v2,
> since non-hierarchical mode is no longer supported.
BTW, having the counter recorded hierarchically would help one of our
use cases. We want to monitor oom_kill for some services, but when a
service is OOM killed, systemd wipes out its cgroup and restarts the
service from scratch (i.e. it creates a brand new cgroup with the same
name). This systemd behavior makes the counter useless unless it is
recorded hierarchically.
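
To illustrate the point (untested sketch, the cgroup path is only an
example): with a hierarchical counter, a monitor can read oom_kill from
a stable ancestor cgroup instead of the service's own cgroup, so the
count survives systemd recreating the child. Something like this
against the cgroup v2 memory.events file:

#!/usr/bin/env python3
# Rough sketch: read the oom_kill count from a stable ancestor cgroup
# (cgroup v2, where memory.events includes events from descendants),
# so the count survives the service's cgroup being destroyed and
# recreated. The ancestor path below is just an example.
ANCESTOR = "/sys/fs/cgroup/system.slice"

def read_oom_kill(cgroup_path):
    # memory.events is "key value" per line; oom_kill is one of the keys.
    with open(cgroup_path + "/memory.events") as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key == "oom_kill":
                return int(value)
    return 0

if __name__ == "__main__":
    print(read_oom_kill(ANCESTOR))

With v1, where the counter in memory.oom_control is not hierarchical,
the same approach would not work once systemd removes the cgroup.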
>
> > confusing because in many cases there would be no victim counted for the
> > above mentioned memcg ooms.
> >
> > > Cgroup v2 documents it, but the description is missing for cgroup v1.
> > >
> > > Signed-off-by: Yang Shi <shy828301@...il.com>
> >
> > Acked-by: Michal Hocko <mhocko@...e.com>
> >
> > > ---
> > > Documentation/admin-guide/cgroup-v1/memory.rst | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> > > index 0936412e044e..44d5429636e2 100644
> > > --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> > > +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> > > @@ -851,6 +851,9 @@ At reading, current status of OOM is shown.
> > > (if 1, oom-killer is disabled)
> > > - under_oom 0 or 1
> > > (if 1, the memory cgroup is under OOM, tasks may be stopped.)
> > > + - oom_kill integer counter
> > > + The number of processes belonging to this cgroup killed by any
> > > + kind of OOM killer.
> > >
> > > 11. Memory Pressure
> > > ===================
> > > --
> > > 2.26.2
> > >
> >
> > --
> > Michal Hocko
> > SUSE Labs