Date:	Thu, 26 Sep 2013 11:59:50 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	rientjes@...gle.com
Cc:	akpm@...ux-foundation.org, oleg@...hat.com, security@...nel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] kthread: Make kthread_create() killable.

Thank you for the comments, David.

David Rientjes wrote:
> Also results in a livelock if you're running in a memcg and have hit its
> limit.

> wait_for_completion() is scary if that completion requires memory that 
> cannot be allocated because the caller is killed but uninterruptible.

I don't think these concerns are specific to wait_for_completion() users.

Currently the OOM killer is disabled from the moment it chooses a process to
kill until the moment the task_struct of the chosen process becomes
unreachable. Any blocking function which waits in TASK_UNINTERRUPTIBLE
(e.g. mutex_lock()) can therefore keep the OOM killer disabled if the current
thread is the one chosen by the OOM killer. As a result, any user of a
blocking function which waits in TASK_UNINTERRUPTIBLE is scary if it assumes
that the current thread will never be chosen by the OOM killer.
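
For example (a minimal sketch of the pattern; some_mutex is a hypothetical
lock, not a specific call site):

	/*
	 * A thread the OOM killer has just chosen has TIF_MEMDIE set and
	 * SIGKILL pending, but a sleep in TASK_UNINTERRUPTIBLE never
	 * checks for pending signals.  The thread cannot exit, its memory
	 * is never released, and oom_scan_process_thread() keeps
	 * returning OOM_SCAN_ABORT for it.
	 */
	mutex_lock(&some_mutex);	/* may block forever if the holder
					 * is itself waiting for memory */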

But it seems to me that re-enabling the OOM killer at some point is more
feasible than purging all such users.

To re-enable the OOM killer at some point, the OOM killer needs to choose
more processes if the to-be-killed process cannot be terminated within a
reasonable period.

For example, add "unsigned long memdie_stamp;" to "struct task_struct", set
"p->memdie_stamp = jiffies + 5 * HZ;" just before "set_tsk_thread_flag(p,
TIF_MEMDIE);", and do

 	if (test_tsk_thread_flag(task, TIF_MEMDIE)) {
 		if (unlikely(frozen(task)))
 			__thaw_task(task);
+		/* Choose more processes if the chosen process cannot die. */
+		if (time_after(jiffies, task->memdie_stamp) &&
+		    task->state == TASK_UNINTERRUPTIBLE)
+			return OOM_SCAN_CONTINUE;
 		if (!force_kill)
 			return OOM_SCAN_ABORT;
 	}

in oom_scan_process_thread().
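
The setter side could be as simple as the following in oom_kill_process()
(a sketch; the 5-second grace period is arbitrary):

	/* Record a deadline before marking the victim, so that
	 * oom_scan_process_thread() above can give up on it later.
	 */
	p->memdie_stamp = jiffies + 5 * HZ;
	set_tsk_thread_flag(p, TIF_MEMDIE);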

This idea increases the possibility of different side effects (e.g. the
second-worst process is chosen by the OOM killer because the worst process
cannot be terminated => memory allocation for writeback fails because the
second-worst process was in ext3's writeback path => the fs-error action
(remount read-only or panic) is triggered by the second-worst process).



Anyway, this patch helps the OOM killer kill the chosen process smoothly
when that process is waiting in kthread_create(). I attach the updated patch
description. Did I merge your comments appropriately?
----------
[PATCH v3] kthread: Make kthread_create() killable.

Any user-process caller of wait_for_completion(), except the global init
process, might be chosen by the OOM killer while waiting for complete() to
be called by some other process which does memory allocation.

When such a caller is chosen by the OOM killer while waiting for completion
in TASK_UNINTERRUPTIBLE, the system remains stressed by memory starvation
because the OOM killer cannot kill that caller.

kthread_create() is one such caller (the completer is kthreadd), and this
patch fixes the problem by making kthread_create() killable.

Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: Oleg Nesterov <oleg@...hat.com>
Acked-by: David Rientjes <rientjes@...gle.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
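
(The patch body is not quoted above. On the waiting side, the idea is
roughly the sketch below, assuming the completion is moved onto the caller's
stack and the create->done pointer is handed off with xchg() so that exactly
one side cleans up; names follow kernel/kthread.c.)

	/*
	 * Sketch: in kthread_create_on_node(), wait killably so that a
	 * SIGKILL sent by the OOM killer is not ignored.
	 */
	if (unlikely(wait_for_completion_killable(&done))) {
		/*
		 * SIGKILLed before kthreadd called complete().  Detach
		 * create->done: if the xchg() returns the non-NULL
		 * pointer, the other side will later see NULL, free
		 * *create itself, and never touch &done, so we can
		 * return at once and let the OOM victim exit.
		 */
		if (xchg(&create->done, NULL))
			return ERR_PTR(-EINTR);
		/*
		 * Otherwise kthreadd already took the pointer and will
		 * call complete() shortly; wait for that so the on-stack
		 * completion is not used after this frame is gone.
		 */
		wait_for_completion(&done);
	}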