Date:   Tue, 10 Jul 2018 11:00:55 -0500
From:   ebiederm@...ssion.com (Eric W. Biederman)
To:     Oleg Nesterov <oleg@...hat.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [Bug 200447] infinite loop in fork syscall

Oleg Nesterov <oleg@...hat.com> writes:

> On 07/09, Linus Torvalds wrote:
>>
>> But the patch was written for testing and feedback more than anything
>> else. Comments?
>
> see my reply on bugzilla. Can't we add lkml?
Done.

> Perhaps we can do another change? Not sure it is actually better, but I think
> it is always good to discuss the alternatives.
>
> 1. Once again, we turn the "int group" argument into a thread/process/pgid enum,
>    and change kill_pgrp/tty_signal_session_leader to pass "group = pgid"; imo
>    this makes sense in any case.

I agree.  There are a lot more multi-process signal cases than are
handled in Linus's patch, and I believe this will clean that up.
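
Roughly, and purely illustrative (these names are mine, not from any
posted patch):

	enum signal_kind {
		SIGNAL_THREAD,	/* tkill/tgkill: one specific thread */
		SIGNAL_PROCESS,	/* kill(pid): any one thread may dequeue */
		SIGNAL_PGID,	/* kill_pgrp()/tty_signal_session_leader() */
	};

	static void complete_signal(int sig, struct task_struct *p,
				    enum signal_kind kind);

That would also turn the magic "group == 2" in your diff below into
the pgid case by name.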

> 2. To simplify, let's suppose we add a new PF_INFORK flag. Yes, this is bad,
>    we can do better. I think we can simply add "struct hlist_head forking_threads"
>    into signal_struct, so complete_signal() can just do hlist_for_each_entry()
>    rather than for_each_thread() + PF_INFORK check. We don't even need a new
>    member in task_struct.

We still need the distinction between multi-process signals and single
process signals (which is the hard part).  For good performance of
signal delivery to multi-threaded tasks we still need a new member in
signal_struct.  Plus it is a bit more work to update or even walk a
list than to maintain a sequence counter.

So I think adding a sequence counter to let us know about multi-process
signals is the local optimum.
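
Something like this is all I mean, as a sketch only (the field name and
the places it is touched are invented here, not from an actual patch):

	/* in struct signal_struct: */
	unsigned int multiprocess_seq;	/* bumped under ->siglock for
					 * every multi-process signal */

	/* complete_signal(), in the multi-process cases: */
	signal->multiprocess_seq++;

	/* copy_process(), sampled before the signal state is copied: */
	unsigned int seq = current->signal->multiprocess_seq;

	/* ... and rechecked under ->siglock at the end, where the
	 * signal_pending() check sits today: */
	if (current->signal->multiprocess_seq != seq) {
		retval = -ERESTARTNOINTR;
		goto bad_fork_cancel_cgroup;
	}

Bumping and comparing an integer is cheaper than maintaining and
walking a list, which is all I am arguing above.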

If we ever need to remove the restart case entirely from fork, we could
queue all new multi-process signals for the newly created task as well.
Going through a list of forking tasks, as you suggest in 2, would be the
natural way to do that.
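
If we ever get there, I imagine something along these lines (equally
rough; none of these names exist):

	struct multiprocess_signals {
		sigset_t signal;
		struct hlist_node node;
	};

	/* signal_struct would gain a "struct hlist_head multiprocess";
	 * copy_process() registers an on-stack entry before the copy
	 * and folds ->signal into the child's pending set at the end.
	 * complete_signal(), for multi-process signals, under ->siglock: */
	struct multiprocess_signals *delayed;
	hlist_for_each_entry(delayed, &signal->multiprocess, node)
		sigaddset(&delayed->signal, sig);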

>
> 3. copy_process() can simply block/unblock all signals (except KILL/STOP), see
>    the "patch" below.

All signals are effectively blocked for the duration of the fork for the
calling task.  Where we get into trouble, and where we need a fix for
correctness, is that another thread can dequeue the signal.  Blocking
signals in the forking task does not change that.

I think that reveals another bug in our current logic.  For blocked
multi-process signals we don't ensure they get delivered to both the
parent and the child if the signal logically comes in after the fork.
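
To make the boundary concrete, a tiny userspace illustration (my own
sketch, not a reproducer for this bug): a blocked, process-directed
signal that arrives before the fork is visible only in the parent,
because the child's pending set starts out empty.

	#include <signal.h>
	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		sigset_t set, pend;
		pid_t pid;

		/* Block SIGUSR1 so a process-directed signal stays pending. */
		sigemptyset(&set);
		sigaddset(&set, SIGUSR1);
		sigprocmask(SIG_BLOCK, &set, NULL);

		/* Signal ourselves; logically "before" the fork. */
		kill(getpid(), SIGUSR1);

		pid = fork();
		if (pid < 0)
			return 1;

		/* Only the parent reports the signal as pending. */
		sigpending(&pend);
		printf("%s: SIGUSR1 pending: %d\n",
		       pid ? "parent" : "child", sigismember(&pend, SIGUSR1));

		if (pid)
			wait(NULL);
		return 0;
	}

A signal that logically comes in after the fork has to end up pending
in both tasks, and with the signal blocked the restart check never
fires, so today the child can miss it entirely.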

Multi-threaded b0rkedness and blocked-signal b0rkedness, plus periodic
timers making fork take forever for no good reason.  This business of
ensuring a signal is logically delivered before or after a fork is
tricky.

Eric


> diff --git a/kernel/fork.c b/kernel/fork.c
> index 9440d61..652ef09 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1591,6 +1591,23 @@ static __latent_entropy struct task_struct *copy_process(
>  	int retval;
>  	struct task_struct *p;
>  
> +	spin_lock_irq(&current->sighand->siglock);
> +	recalc_sigpending_tsk(current);
> +	if (signal_pending(current)) {
> +		retval = -ERESTARTNOINTR;
> +	} else {
> +		// Yes, yes, this is wrong, just to explain the idea.
> +		// We should not block SIGKILL, we need to clear PF_INFORK
> +		// and restore ->blocked in error paths, we need helper(s).
> +		retval = 0;
> +		current->flags |= PF_INFORK;
> +		current->saved_sigmask = current->blocked;
> +		sigfillset(&current->blocked);
> +	}
> +	spin_unlock_irq(&current->sighand->siglock);
> +	if (retval)
> +		return ERR_PTR(retval);
> +
>  	/*
>  	 * Don't allow sharing the root directory with processes in a different
>  	 * namespace
> @@ -1918,8 +1935,14 @@ static __latent_entropy struct task_struct *copy_process(
>  	 * A fatal signal pending means that current will exit, so the new
>  	 * thread can't slip out of an OOM kill (or normal SIGKILL).
>  	*/
> -	recalc_sigpending();
> -	if (signal_pending(current)) {
> +	// check signal_pending() before __set_task_blocked() which does
> +	// recalc_sigpending(). Again, this is just to explain the idea,
> +	// __set_task_blocked() is not exported, it is suboptimal, we can
> +	// do better.
> +	bool eintr = signal_pending(current);
> +	current->flags &= ~PF_INFORK;
> +	__set_task_blocked(current, &current->saved_sigmask);
> +	if (eintr) {
>  		retval = -ERESTARTNOINTR;
>  		goto bad_fork_cancel_cgroup;
>  	}
> diff --git a/kernel/signal.c b/kernel/signal.c
> index 8d8a940..3433e66 100644
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -900,6 +900,14 @@ static void complete_signal(int sig, struct task_struct *p, int group)
>  	struct signal_struct *signal = p->signal;
>  	struct task_struct *t;
>  
> +	// kill_pgrp/etc.
> +	if (group == 2) {
> +		for_each_thread(p, t) {
> +			if ((t->flags & PF_INFORK) && !sigismember(&t->saved_sigmask, sig))
> +				signal_wake_up(t, 0);
> +		}
> +	}
> +
>  	/*
>  	 * Now find a thread we can wake up to take the signal off the queue.
>  	 *
