Message-ID: <20150326160742.GR15257@dhcp22.suse.cz>
Date: Thu, 26 Mar 2015 17:07:42 +0100
From: Michal Hocko <mhocko@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Huang Ying <ying.huang@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Dave Chinner <david@...morbit.com>,
Theodore Ts'o <tytso@....edu>
Subject: Re: [patch 06/12] mm: oom_kill: simplify OOM killer locking
On Thu 26-03-15 11:17:46, Johannes Weiner wrote:
> On Thu, Mar 26, 2015 at 02:31:11PM +0100, Michal Hocko wrote:
[...]
> > > @@ -795,27 +728,21 @@ bool out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask,
> > > */
> > > void pagefault_out_of_memory(void)
> > > {
> > > - struct zonelist *zonelist;
> > > -
> > > - down_read(&oom_sem);
> > > if (mem_cgroup_oom_synchronize(true))
> > > - goto unlock;
> > > + return;
> >
> > OK, so we are back to what David has asked previously. We do not need
> > the lock for memcg and oom_killer_disabled because we know that no tasks
> > (except for a potential OOM victim) are lurking around at the time
> > oom_killer_disable() is called. So I guess we want to stick a comment
> > into mem_cgroup_oom_synchronize before we check for oom_killer_disabled.
>
> I would prefer everybody who sets TIF_MEMDIE and kills a task to hold
> the lock, including memcg. Simplicity is one thing, but also a global
> OOM kill might not even be necessary when it is racing with a memcg
> kill.
Sure, I am fine with that.
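
Just to make sure we mean the same thing, something along these lines
(a sketch only, assuming the global oom_lock mutex this series
introduces, with mem_cgroup_oom_synchronize taking that same lock
internally around its kill; the out_of_memory() signature is the one
from the hunk above, not something I am proposing):

	void pagefault_out_of_memory(void)
	{
		/* a memcg kill, if any, happens under oom_lock inside */
		if (mem_cgroup_oom_synchronize(true))
			return;

		if (!mutex_trylock(&oom_lock))
			return;

		/*
		 * oom_killer_disabled is only ever checked under the
		 * lock, so a global kill cannot race with either a
		 * memcg kill or oom_killer_disable().
		 */
		if (!oom_killer_disabled)
			out_of_memory(NULL, 0, 0, NULL, false);

		mutex_unlock(&oom_lock);
	}
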
> > After those are fixed, feel free to add
> > Acked-by: Michal Hocko <mhocko@...e.cz>
>
> Thanks.
--
Michal Hocko
SUSE Labs