Message-Id: <201604072055.GAI52128.tHLVOFJOQMFOFS@I-love.SAKURA.ne.jp>
Date:	Thu, 7 Apr 2016 20:55:34 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...nel.org, linux-mm@...ck.org
Cc:	rientjes@...gle.com, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, mhocko@...e.com
Subject: Re: [PATCH 3/3] mm, oom_reaper: clear TIF_MEMDIE for all tasks queued for oom_reaper

Michal Hocko wrote:
> The first obvious one is when the oom victim clears its mm and gets
> stuck later on. oom_reaper would back of on find_lock_task_mm returning
> NULL. We can safely try to clear TIF_MEMDIE in this case because such a
> task would be ignored by the oom killer anyway. The flag would be
> cleared by that time already most of the time anyway.

I don't understand what this is trying to say. The OOM victim clears
TIF_MEMDIE as soon as it sets current->mm = NULL. So even if the OOM victim
clears its mm and gets stuck later on (e.g. at exit_task_work()),
TIF_MEMDIE has already been cleared by the victim itself by that point.
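To spell out the ordering I mean, here is a simplified sketch of the exit
path (condensed from kernel/exit.c of around that era; not a literal copy,
and some details are elided):

```c
/*
 * Simplified sketch: the exit path drops the mm and clears TIF_MEMDIE
 * (via exit_oom_victim()) before any later exit work runs, so a task
 * stuck at exit_task_work() has no TIF_MEMDIE left to clear.
 */
static void exit_mm(struct task_struct *tsk)
{
	/* ... detach from the address space ... */
	tsk->mm = NULL;		/* current->mm is now NULL */
	/* ... */
	if (test_thread_flag(TIF_MEMDIE))
		exit_oom_victim(tsk);	/* clears TIF_MEMDIE here */
}

void do_exit(long code)
{
	/* ... */
	exit_mm(current);	/* mm dropped, TIF_MEMDIE cleared */
	/* ... */
	exit_task_work(current);	/* a task stuck here already
					 * lost TIF_MEMDIE above */
	/* ... */
}
```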

> 
> The less obvious one is when the oom reaper fails due to mmap_sem
> contention. Even if we clear TIF_MEMDIE for this task then it is not
> very likely that we would select another task too easily because
> we haven't reaped the last victim and so it would be still the #1
> candidate. There is a rare race condition possible when the current
> victim terminates before the next select_bad_process but considering
> that oom_reap_task had retried several times before giving up then
> this sounds like a borderline thing.

Is that helpful? Allowing the OOM killer to select the same thread again
simply floods the kernel log buffer with OOM kill messages.

I think we should not allow the OOM killer to select the same thread again,
e.g. by setting tsk->signal->oom_score_adj = OOM_SCORE_ADJ_MIN regardless of
whether reaping that thread's memory succeeded.
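A minimal sketch of what I have in mind, against oom_reap_task() in
mm/oom_kill.c (the retry loop and helper names follow the patch series
under discussion; exact placement is illustrative, not a tested patch):

```c
/*
 * Sketch only: after the reaping attempt, hide the victim from the OOM
 * killer unconditionally, so select_bad_process() cannot pick the same
 * thread again and flood the log with repeated OOM kill messages.
 */
static void oom_reap_task(struct task_struct *tsk)
{
	int attempts = 0;

	/* Retry the reap a few times before giving up. */
	while (attempts++ < MAX_OOM_REAP_RETRIES && !__oom_reap_task(tsk))
		schedule_timeout_idle(HZ / 10);

	/*
	 * Make this task unselectable whether or not __oom_reap_task()
	 * succeeded, instead of only clearing TIF_MEMDIE on failure.
	 */
	tsk->signal->oom_score_adj = OOM_SCORE_ADJ_MIN;

	/* Drop the reference taken when the task was queued. */
	put_task_struct(tsk);
}
```

The point of the unconditional assignment is that even when mmap_sem
contention made reaping fail, re-selecting the same victim gains nothing.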
