Message-ID: <20160128235110.GA5805@cmpxchg.org>
Date: Thu, 28 Jan 2016 18:51:10 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Hillf Danton <hillf.zj@...baba-inc.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 4/3] mm, oom: drop the last allocation attempt before
out_of_memory
On Thu, Jan 28, 2016 at 03:19:08PM -0800, David Rientjes wrote:
> On Thu, 28 Jan 2016, Johannes Weiner wrote:
>
> > The check has to happen while holding the OOM lock, otherwise we'll
> > end up killing much more than necessary when there are many racing
> > allocations.
> >
>
> Right, we need to try with ALLOC_WMARK_HIGH after oom_lock has been
> acquired.
>
> The situation is still somewhat fragile, but I think it's
> tangential to this patch series. If the ALLOC_WMARK_HIGH allocation fails
> because an oom victim hasn't freed its memory yet, and then the TIF_MEMDIE
> thread isn't visible during the oom killer's tasklist scan because it has
> exited, we still end up killing more than we should. The likelihood of
> this happening grows with the length of the tasklist.
>
> Perhaps we should try testing watermarks after a victim has been selected
> and immediately before killing? (Aside: we actually carry an internal
> patch to test mem_cgroup_margin() in the memcg oom path after selecting a
> victim because we have been hit with this before in the memcg path.)
>
> I would think that succeeding with an ALLOC_WMARK_HIGH retry would
> indicate enough free memory to deem that we aren't going to immediately
> reenter an oom condition, so going ahead with the deferred kill would be
> a waste of time.
>
> The downside is how sloppy this would be, because it blurs the line
> between oom killer and page allocator. We'd need the oom killer to return
> the selected victim to the page allocator, try the allocation, and then
> call oom_kill_process() if necessary.
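
For illustration, a rough sketch of that recheck-after-victim-selection
idea inside out_of_memory() might look like the below. This is only a
sketch under assumptions, not a tested patch: oom_alloc_would_succeed()
is a made-up stand-in for retrying get_page_from_freelist() with
ALLOC_WMARK_HIGH while oom_lock is still held.

	p = select_bad_process(oc, &points, totalpages);
	if (p && p != (void *)-1UL) {
		/*
		 * A racing exit may have freed enough memory while the
		 * victim was being selected; re-check before killing.
		 */
		if (oom_alloc_would_succeed(oc)) {	/* hypothetical helper */
			put_task_struct(p);	/* ref from select_bad_process() */
			return true;
		}
		oom_kill_process(oc, p, points, totalpages, NULL,
				 "Out of memory");
		schedule_timeout_killable(1);
	}
	return true;
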
https://lkml.org/lkml/2015/3/25/40
We could have out_of_memory() wait until the number of outstanding OOM
victims drops to 0. Then __alloc_pages_may_oom() doesn't relinquish
the lock until its kill has been finalized:
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 914451a..4dc5b9d 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -892,7 +892,9 @@ bool out_of_memory(struct oom_control *oc)
 		 * Give the killed process a good chance to exit before trying
 		 * to allocate memory again.
 		 */
-		schedule_timeout_killable(1);
+		if (!test_thread_flag(TIF_MEMDIE))
+			wait_event_timeout(oom_victims_wait,
+					   !atomic_read(&oom_victims), HZ);
 	}
 	return true;
 }
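
For reference, the wakeup side of this should already be in place: when
the last victim clears TIF_MEMDIE, exit_oom_victim() wakes
oom_victims_wait. Roughly (quoted from memory, so treat it as a sketch
rather than the exact tree state):

	void exit_oom_victim(void)
	{
		clear_thread_flag(TIF_MEMDIE);

		/* Last victim gone - let the waiters in out_of_memory() proceed. */
		if (!atomic_dec_return(&oom_victims))
			wake_up_all(&oom_victims_wait);
	}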