Message-ID: <20150326130106.GG15257@dhcp22.suse.cz>
Date:	Thu, 26 Mar 2015 14:01:06 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
	Huang Ying <ying.huang@...el.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Dave Chinner <david@...morbit.com>,
	Theodore Ts'o <tytso@....edu>
Subject: Re: [patch 04/12] mm: oom_kill: remove unnecessary locking in
 exit_oom_victim()

On Thu 26-03-15 13:53:48, Michal Hocko wrote:
> On Wed 25-03-15 02:17:08, Johannes Weiner wrote:
> > Disabling the OOM killer needs to exclude allocators from entering,
> > not existing victims from exiting.
> 
> The idea was that exit_oom_victim doesn't miss a waiter.
> 
> exit_oom_victim is doing
> 	if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
> 
> so there is a full (implicit) memory barrier before the oom_killer_disabled
> check. The other part is trickier. oom_killer_disable does:
> 	oom_killer_disabled = true;
>         up_write(&oom_sem);
> 
>         wait_event(oom_victims_wait, !atomic_read(&oom_victims));
> 
> up_write doesn't guarantee a full memory barrier AFAICS in
> Documentation/memory-barriers.txt (although the generic and x86
> implementations seem to implement it as a full barrier), but wait_event
> implies a full memory barrier (prepare_to_wait_event does a spin
> lock & unlock) before checking the condition in the slow path. This
> should be sufficient and documented...
> 
> 	/*
> 	 * We do not need to hold oom_sem here because oom_killer_disable
> 	 * guarantees that the oom_killer_disabled change is visible before
> 	 * the waiter is put to sleep (prepare_to_wait_event) so
> 	 * we cannot miss a wake up.
> 	 */
> 
> in unmark_oom_victim()

OK, I can see that the next patch removes oom_killer_disabled
completely. The dependency won't be there, and neither will the
concerns about the memory barriers.
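
For reference, here is a rough sketch of the pairing those concerns were
about, using the current mm/oom_kill.c naming (oom_sem, oom_victims,
oom_victims_wait, oom_killer_disabled). The bodies are from memory of the
current tree, so this is only illustrative and not the actual patch:

void unmark_oom_victim(void)
{
	if (!test_and_clear_thread_flag(TIF_MEMDIE))
		return;

	/*
	 * No need to hold oom_sem here: oom_killer_disable guarantees
	 * that the oom_killer_disabled change is visible before the
	 * waiter is put to sleep (prepare_to_wait_event), so we cannot
	 * miss a wake up.  atomic_dec_return implies a full memory
	 * barrier before the oom_killer_disabled check below.
	 */
	if (!atomic_dec_return(&oom_victims) && oom_killer_disabled)
		wake_up_all(&oom_victims_wait);
}

bool oom_killer_disable(void)
{
	/* Do not race with an ongoing OOM killer or an OOM victim. */
	down_write(&oom_sem);
	if (test_thread_flag(TIF_MEMDIE)) {
		up_write(&oom_sem);
		return false;
	}

	oom_killer_disabled = true;
	up_write(&oom_sem);

	/*
	 * prepare_to_wait_event takes and drops the waitqueue spinlock
	 * before the condition is rechecked in the slow path, so the
	 * store to oom_killer_disabled above is visible before this
	 * task can block on oom_victims_wait.
	 */
	wait_event(oom_victims_wait, !atomic_read(&oom_victims));

	return true;
}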

Is there any reason why the patches are ordered this way? Doing it the
other way around would sound more logical to me.
-- 
Michal Hocko
SUSE Labs
