Message-ID: <20150326151050.GB23973@cmpxchg.org>
Date: Thu, 26 Mar 2015 11:10:50 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...e.cz>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Huang Ying <ying.huang@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Dave Chinner <david@...morbit.com>,
Theodore Ts'o <tytso@....edu>
Subject: Re: [patch 04/12] mm: oom_kill: remove unnecessary locking in
exit_oom_victim()
On Thu, Mar 26, 2015 at 02:01:06PM +0100, Michal Hocko wrote:
> On Thu 26-03-15 13:53:48, Michal Hocko wrote:
> > On Wed 25-03-15 02:17:08, Johannes Weiner wrote:
> > > Disabling the OOM killer needs to exclude allocators from entering,
> > > not existing victims from exiting.
> >
> > The idea was that exit_oom_victim doesn't miss a waiter.
> >
> > exit_oom_victim is doing
> > if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
> >
> > so there is a full (implicit) memory barrier before the oom_killer_disabled
> > check. The other part is trickier. oom_killer_disable does:
> > oom_killer_disabled = true;
> > up_write(&oom_sem);
> >
> > wait_event(oom_victims_wait, !atomic_read(&oom_victims));
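For context, the disable side quoted above fits together roughly like this.
This is a sketch reconstructed from the fragments in this thread; the
TIF_MEMDIE check and error handling of the real function are omitted, so it
is not verbatim from the tree:

	bool oom_killer_disable(void)
	{
		/*
		 * Flip the flag under oom_sem held for write so that no
		 * allocator can mark a new OOM victim concurrently.
		 */
		down_write(&oom_sem);
		oom_killer_disabled = true;
		up_write(&oom_sem);

		/* Wait for all existing victims to exit. */
		wait_event(oom_victims_wait, !atomic_read(&oom_victims));

		return true;
	}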
> >
> > up_write doesn't guarantee a full memory barrier AFAICS in
> > Documentation/memory-barriers.txt (although the generic and x86
> > implementations seem to implement it as a full barrier) but wait_event
> > implies a full memory barrier (prepare_to_wait_event does spin
> > lock&unlock) before checking the condition in the slow path. This should
> > be sufficient and documented...
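To spell out the slow path referred to here: wait_event() expands to a loop
that calls prepare_to_wait_event() before every check of the condition. A
condensed sketch of that loop, written from memory with signal handling left
out, so an approximation rather than the exact macro body:

	for (;;) {
		/*
		 * prepare_to_wait_event() queues the task and sets its
		 * state under the waitqueue spinlock; the lock/unlock
		 * pair orders the queueing before the condition check
		 * below, so a wake-up issued after this point cannot be
		 * missed.
		 */
		prepare_to_wait_event(&wq, &__wait, TASK_UNINTERRUPTIBLE);

		if (!atomic_read(&oom_victims))	/* the condition */
			break;

		schedule();
	}
	finish_wait(&wq, &__wait);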
> >
> > /*
> > * We do not need to hold oom_sem here because oom_killer_disable
> > * guarantees that the oom_killer_disabled change is visible before
> > * the waiter is put to sleep (prepare_to_wait_event) so
> > * we cannot miss a wake-up.
> > */
> >
> > in unmark_oom_victim()
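Placed in context, the comment would sit right above the lockless wake-up.
A sketch based on the fragments quoted above (the function body is
reconstructed, not copied from the patch):

	void exit_oom_victim(void)
	{
		clear_thread_flag(TIF_MEMDIE);

		/*
		 * We do not need to hold oom_sem here because
		 * oom_killer_disable guarantees that the
		 * oom_killer_disabled change is visible before the
		 * waiter is put to sleep (prepare_to_wait_event), so we
		 * cannot miss a wake-up.
		 */
		if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
			wake_up_all(&oom_victims_wait);
	}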
>
> OK, I can see that the next patch removes oom_killer_disabled
> completely. Once that is gone, the dependency is gone as well, and so
> are the concerns about the memory barriers.
>
> Is there any reason why the patches are ordered this way? Doing the
> oom_killer_disabled removal first would sound more logical to me.
I honestly didn't even think about the dependency between the lock and
this check. They both looked unnecessary to me, and I stopped putting
any more thought into it once I had convinced myself that they were.
The order was chosen because the waitqueue generalization seemed like
the bigger deal: one is just an unnecessary lock, whereas this extra
check cost me quite some time debugging and looked like the more
harmful piece of code, hence the more important one to fix. It's no
problem to reorder the patches, though.