Message-ID: <YL8MjSteKeO7w0il@dhcp22.suse.cz>
Date: Tue, 8 Jun 2021 08:22:05 +0200
From: Michal Hocko <mhocko@...e.com>
To: Waiman Long <llong@...hat.com>
Cc: Shakeel Butt <shakeelb@...gle.com>,
Aaron Tomlin <atomlin@...hat.com>,
Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm/oom_kill: allow oom kill allocating task for
non-global case
On Mon 07-06-21 16:44:09, Waiman Long wrote:
> On 6/7/21 4:03 PM, Michal Hocko wrote:
> > On Mon 07-06-21 21:36:47, Michal Hocko wrote:
> > > On Mon 07-06-21 15:18:38, Waiman Long wrote:
> > [...]
> > > > A partial OOM report below:
> > > Do you happen to have the full report?
> > >
> > > > [ 8221.433608] memory: usage 21280kB, limit 204800kB, failcnt 49116
> > > > :
> > > > [ 8227.239769] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
> > > > [ 8227.242495] [1611298] 0 1611298 35869 635 167936 0 -1000 conmon
> > > > [ 8227.242518] [1702509] 0 1702509 35869 701 176128 0 -1000 conmon
> > > > [ 8227.242522] [1703345] 1001050000 1703294 183440 0 2125824 0 999 node
> > > > [ 8227.242706] Out of memory and no killable processes...
> > > > [ 8227.242731] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
> > Btw it is surprising to not see __GFP_ACCOUNT here.
>
> There are a number of OOM kills in the kernel log, and none of the tasks
> that invoked the oom-killer have the __GFP_ACCOUNT flag set.
OK. A full report (including the backtrace) would tell us more about the
source of the charge. I thought that most #PF charging paths use the same
gfp mask as the allocation (which would include other flags on top of
GFP_KERNEL), but it seems we just use GFP_KERNEL in many places. There
are also some direct callers of the charging API for kernel allocations.
Not that this is super important, but it caught my attention.
You are saying that there are other OOM kills going on. Are they all for
the same memcg? Is it possible that the only eligible task has already
been killed and oom-reaped?
--
Michal Hocko
SUSE Labs