Message-ID: <20150326125348.GF15257@dhcp22.suse.cz>
Date: Thu, 26 Mar 2015 13:53:48 +0100
From: Michal Hocko <mhocko@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Huang Ying <ying.huang@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Dave Chinner <david@...morbit.com>,
Theodore Ts'o <tytso@....edu>
Subject: Re: [patch 04/12] mm: oom_kill: remove unnecessary locking in
exit_oom_victim()
On Wed 25-03-15 02:17:08, Johannes Weiner wrote:
> Disabling the OOM killer needs to exclude allocators from entering,
> not existing victims from exiting.
The idea was that exit_oom_victim doesn't miss a waiter.
exit_oom_victim is doing:
	if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
so there is a full (implicit) memory barrier before the oom_killer_disabled
check. The other part is trickier. oom_killer_disable does:
	oom_killer_disabled = true;
	up_write(&oom_sem);
	wait_event(oom_victims_wait, !atomic_read(&oom_victims));
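To make the pairing explicit, an annotated sketch of the two sides (not
the exact source):

	/* victim side: exit_oom_victim() */
	if (!atomic_dec_return(&oom_victims) &&	/* full barrier */
	    oom_killer_disabled)		/* load ordered after the dec */
		wake_up_all(&oom_victims_wait);

	/* waiter side: oom_killer_disable() */
	oom_killer_disabled = true;		/* store... */
	up_write(&oom_sem);			/* ...no full barrier guaranteed */
	wait_event(oom_victims_wait,		/* barrier before each re-check
						 * in the slow path */
		   !atomic_read(&oom_victims));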
up_write doesn't guarantee a full memory barrier AFAICS from
Documentation/memory-barriers.txt (although the generic and x86
implementations seem to implement it as one), but wait_event implies a
full memory barrier (prepare_to_wait_event does a spin lock & unlock)
before checking the condition in the slow path. This should
be sufficient and documented, e.g. by a comment like the following in
unmark_oom_victim():

	/*
	 * We do not need to hold oom_sem here because oom_killer_disable
	 * guarantees that the oom_killer_disabled change is visible before
	 * the waiter is put to sleep (prepare_to_wait_event), so we
	 * cannot miss a wake up.
	 */
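For reference, the slow path of wait_event goes through
prepare_to_wait_event before each re-check of the condition. Condensed
from kernel/sched/wait.c of this era (signal handling and the
exclusive-waiter case trimmed):

	long prepare_to_wait_event(wait_queue_head_t *q, wait_queue_t *wait,
				   int state)
	{
		unsigned long flags;

		wait->private = current;
		wait->func = autoremove_wake_function;

		spin_lock_irqsave(&q->lock, flags);	/* ACQUIRE */
		if (list_empty(&wait->task_list))
			__add_wait_queue(q, wait);
		set_current_state(state);		/* implies a full barrier */
		spin_unlock_irqrestore(&q->lock, flags);	/* RELEASE */

		return 0;
	}

wake_up_all takes the same q->lock, so either it sees the waiter already
queued or the waiter's condition check sees the atomic_dec_return; no
wakeup can be lost.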
> Right now the only waiter is suspend code, which achieves quiescence
> by disabling the OOM killer. But later on we want to add waits that
> hold the lock instead to stop new victims from showing up.
It is not entirely clear from the current context what you mean by this.
exit_oom_victim is not called from any context that would be locked by
the OOM internals, so taking the lock there should be safe.
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
I have nothing against the change, as it seems correct, but it would be
good to have a better clarification and to document the implicit memory
barriers.
Acked-by: Michal Hocko <mhocko@...e.cz>
> ---
> mm/oom_kill.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 4b9547be9170..88aa9ba40fa5 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -437,14 +437,12 @@ void exit_oom_victim(void)
> {
> clear_thread_flag(TIF_MEMDIE);
>
> - down_read(&oom_sem);
> /*
> * There is no need to signal the last oom_victim if there
> * is nobody who cares.
> */
> if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
> wake_up_all(&oom_victims_wait);
> - up_read(&oom_sem);
> }
>
> /**
> --
> 2.3.3
>
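FWIW, the resulting function (reconstructed from the diff above) reduces
to:

	void exit_oom_victim(void)
	{
		clear_thread_flag(TIF_MEMDIE);

		/*
		 * There is no need to signal the last oom_victim if there
		 * is nobody who cares.
		 */
		if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
			wake_up_all(&oom_victims_wait);
	}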
--
Michal Hocko
SUSE Labs