Message-Id: <201707282215.AGI69210.VFOHQFtOFSOJML@I-love.SAKURA.ne.jp>
Date: Fri, 28 Jul 2017 22:15:01 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org
Cc: mjaggi@...iumnetworks.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: Possible race condition in oom-killer
Michal Hocko wrote:
> > 4578 is consuming memory as mlocked pages. But the OOM reaper cannot reclaim
> > mlocked pages (i.e. can_madv_dontneed_vma() returns false due to VM_LOCKED), can it?
>
> You are absolutely right. I am pretty sure I've checked the mlocked counter
> as the first thing but that must be from one of the earlier oom reports.
> My fault I haven't checked it in the critical one.
>
> [ 365.267347] oom_reaper: reaped process 4578 (oom02), now anon-rss:131559616kB, file-rss:0kB, shmem-rss:0kB
> [ 365.282658] oom_reaper: reaped process 4583 (oom02), now anon-rss:131561664kB, file-rss:0kB, shmem-rss:0kB
>
> and the above screamed about the fact that I was just completely blind.
>
> mlock pages handling has been on my todo list for quite some time already
> but I haven't gotten around to implementing it. mlock code is very tricky.
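For reference, the VM_LOCKED check mentioned above looks roughly like this
(a sketch from memory of the mm/madvise.c / mm/oom_kill.c code of that era,
not the exact source):

  static bool can_madv_dontneed_vma(struct vm_area_struct *vma)
  {
          /* mlocked (VM_LOCKED) vmas fail this test, so the OOM reaper
           * cannot unmap them via the MADV_DONTNEED-style path. */
          return !(vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP));
  }

  /* in __oom_reap_task_mm(): vmas failing the check are simply skipped */
  for (vma = mm->mmap; vma; vma = vma->vm_next) {
          if (!can_madv_dontneed_vma(vma))
                  continue;
          ...
  }

That is why the reaper reports "reaped" while anon-rss stays huge: the
mlocked anonymous memory is never touched.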
In this case, task_will_free_mem(current) in out_of_memory() returned false
because MMF_OOM_SKIP was already set on the victim's mm, and that allowed
each thread sharing that mm to select a new OOM victim. If
task_will_free_mem(current) had not returned false, the threads sharing the
MMF_OOM_SKIP mm would not have kept selecting new victims until every
OOM-killable process was killed and the kernel called panic().