Message-ID: <20171031141034.bg25xbo5cyfafnyp@dhcp22.suse.cz>
Date: Tue, 31 Oct 2017 15:10:34 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rientjes@...gle.com,
hannes@...xchg.org, mjaggi@...iumnetworks.com, mgorman@...e.de,
oleg@...hat.com, vdavydov.dev@...il.com, vbabka@...e.cz
Subject: Re: [PATCH] mm,oom: Try last second allocation before and after
selecting an OOM victim.
On Tue 31-10-17 22:51:49, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 31-10-17 22:13:05, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Tue 31-10-17 21:42:23, Tetsuo Handa wrote:
> > > > > > While both have some merit, the first reason is mostly historical
> > > > > > because we have the explicit locking now and it is really unlikely that
> > > > > > the memory would be available right after we have given up trying.
> > > > > > Last attempt allocation makes some sense of course but considering that
> > > > > > the oom victim selection is quite an expensive operation which can take
> > > > > > a considerable amount of time it makes much more sense to retry the
> > > > > > allocation after the most expensive part rather than before. Therefore
> > > > > > move the last attempt right before we are trying to kill an oom victim
> > > > > > to rule out potential races when somebody could have freed a lot of
> > > > > > memory in the meantime. This will reduce the time window for
> > > > > > potentially premature OOM killing considerably.
> > > > >
> > > > > But this is about "doing last second allocation attempt after selecting
> > > > > an OOM victim". This is not about "allowing OOM victims to try ALLOC_OOM
> > > > > before selecting next OOM victim" which is the actual problem I'm trying
> > > > > to deal with.
> > > >
> > > > Then split it into two. First make the general case work and then add
> > > > the more sophisticated one on top. Dealing with multiple issues at once
> > > > is what makes all those brain cells suffer.
> > >
> > > I'm failing to understand. I was dealing with a single issue at a time.
> > > The single issue is "MMF_OOM_SKIP prematurely prevents OOM victims from
> > > trying ALLOC_OOM before the next OOM victim is selected". Then, what are
> > > the general case and the more sophisticated one? I wonder what, other than
> > > "MMF_OOM_SKIP should allow OOM victims to try ALLOC_OOM once before the
> > > next OOM victim is selected", could exist...
> >
> > Try to think a little bit outside of your very specific and borderline
> > usecase and it will become obvious. ALLOC_OOM is a trivial update on top
> > of moving get_page_from_freelist to oom_kill_process, which is a more
> > generic race window reducer.
>
> So, you meant "doing a last second allocation attempt after selecting an OOM
> victim" as the general case and "using ALLOC_OOM for the last second
> allocation attempt" as the more sophisticated one. Then, you won't object to
> conditionally switching between ALLOC_WMARK_HIGH and ALLOC_OOM for the last
> second allocation attempt, will you?
Yes, for oom victims.
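
Something like the following untested sketch is what I have in mind (the
helper name is made up for illustration; ALLOC_WMARK_HIGH, ALLOC_OOM and
tsk_is_oom_victim() are what the page allocator and the oom killer already
use):

/*
 * Untested sketch only: retry with a very high watermark for regular
 * tasks, but let known oom victims dip into memory reserves.
 */
static struct page *last_second_allocation(gfp_t gfp_mask, unsigned int order,
					   const struct alloc_context *ac)
{
	unsigned int alloc_flags = ALLOC_WMARK_HIGH|ALLOC_CPUSET;

	/* oom victims are entitled to use memory reserves */
	if (tsk_is_oom_victim(current))
		alloc_flags = ALLOC_OOM;

	return get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
}
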
> But doing ALLOC_OOM for the last second allocation attempt from
> out_of_memory() involves duplicating code (e.g. rebuilding the zonelist).
Why would you do it? Do not blindly copy and paste code without
a good reason. What kind of problem does this actually solve?
> What is your preferred approach?
> Duplicate the relevant code? Use get_page_from_freelist() without rebuilding
> the zonelist? Use __alloc_pages_nodemask()?
Just do what we do now with ALLOC_WMARK_HIGH and, in a separate patch, use
ALLOC_OOM for oom victims. There shouldn't be any reason to play additional
tricks here.
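
For reference, the last attempt we do now in __alloc_pages_may_oom() is
simply the following (quoting from memory, so double check against current
mainline):

	/*
	 * Go through the zonelist yet one more time, keep very high watermark
	 * here, this is only to catch a parallel oom killing, we must fail if
	 * we're still under heavy pressure.
	 */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
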
--
Michal Hocko
SUSE Labs