Message-ID: <20151201132927.GG4567@dhcp22.suse.cz>
Date:	Tue, 1 Dec 2015 14:29:28 +0100
From:	Michal Hocko <mhocko@...nel.org>
To:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:	linux-mm@...ck.org, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, mgorman@...e.de,
	rientjes@...gle.com, riel@...hat.com, hughd@...gle.com,
	oleg@...hat.com, andrea@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH -v2] mm, oom: introduce oom reaper

On Sun 29-11-15 01:10:10, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > > Users of mmap_sem which need it for write should be carefully reviewed
> > > to use _killable waiting as much as possible and reduce allocation
> > > requests done with the lock held to the absolute minimum to reduce the
> > > risk even further.
> > 
> > It would be nice if we could have down_write_killable()/down_read_killable().
> 
> It would be nice if we could also have __GFP_KILLABLE.

Well, we already do this implicitly: the OOM killer marks the task with
mark_oom_victim() when it has a fatal signal pending, and
__alloc_pages_slowpath() then fails the allocation if the memory reserves
do not help to finish it.
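
To illustrate (just a sketch, not the exact mainline code): this implicit
behaviour, and the bail-out suggested below, boil down to a check along
these lines in __alloc_pages_slowpath():

	/* sketch: give up early for a task that is already dying */
	if (fatal_signal_pending(current)) {
		/*
		 * mark_oom_victim() has already granted this task access to
		 * memory reserves.  If even the reserves cannot satisfy the
		 * request, fail the allocation instead of retrying, much like
		 * a __GFP_NORETRY allocation fails after its first attempt.
		 */
		goto nopage;
	}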

> Although it currently can't be perfect, because the reclaim functions called
> from __alloc_pages_slowpath() use unkillable waits, starting by simply
> bailing out (as with __GFP_NORETRY) when fatal_signal_pending(current) is
> true would be helpful.
> 
> So far I'm not hitting any problems with the testers, except for the one
> using mmap()/munmap().
> 
> I think that cmpxchg() was not needed.

It is not needed right now, but I would rather not depend on the oom
mutex here. This is not a hot path, so an atomic does not add any
overhead that matters.

> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index c2ab7f9..1a65739 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -483,8 +483,6 @@ static int oom_reaper(void *unused)
>  
>  static void wake_oom_reaper(struct mm_struct *mm)
>  {
> -	struct mm_struct *old_mm;
> -
>  	if (!oom_reaper_th)
>  		return;
>  
> @@ -492,14 +490,15 @@ static void wake_oom_reaper(struct mm_struct *mm)
>  	 * Make sure that only a single mm is ever queued for the reaper
>  	 * because multiple are not necessary and the operation might be
>  	 * disruptive so better reduce it to the bare minimum.
> +	 * Caller is serialized by oom_lock mutex.
>  	 */
> -	old_mm = cmpxchg(&mm_to_reap, NULL, mm);
> -	if (!old_mm) {
> +	if (!mm_to_reap) {
>  		/*
>  		 * Pin the given mm. Use mm_count instead of mm_users because
>  		 * we do not want to delay the address space tear down.
>  		 */
>  		atomic_inc(&mm->mm_count);
> +		mm_to_reap = mm;
>  		wake_up(&oom_reaper_wait);
>  	}
>  }
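
Just to illustrate what I mean (condensed from the removed lines in the hunk
above, not a new proposal): with the cmpxchg kept, wake_oom_reaper() enforces
the single-mm invariant on its own, without relying on the caller holding
oom_lock:

static void wake_oom_reaper(struct mm_struct *mm)
{
	struct mm_struct *old_mm;

	if (!oom_reaper_th)
		return;

	/*
	 * cmpxchg() publishes mm atomically and only if nothing is queued
	 * yet, so the "only a single mm is ever queued" guarantee does not
	 * depend on any outside locking.
	 */
	old_mm = cmpxchg(&mm_to_reap, NULL, mm);
	if (!old_mm) {
		/* Pin via mm_count; do not delay the address space tear down. */
		atomic_inc(&mm->mm_count);
		wake_up(&oom_reaper_wait);
	}
}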

-- 
Michal Hocko
SUSE Labs