Date:   Fri, 29 Jul 2022 07:50:34 -0600
From:   Tycho Andersen <tycho@...ho.pizza>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>
Cc:     Oleg Nesterov <oleg@...hat.com>,
        "Serge E. Hallyn" <serge@...lyn.com>,
        Miklos Szeredi <miklos@...redi.hu>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: __fatal_signal_pending() should also check
 PF_EXITING

On Fri, Jul 29, 2022 at 12:04:17AM -0500, Eric W. Biederman wrote:
> Tycho Andersen <tycho@...ho.pizza> writes:
> 
> > On Thu, Jul 28, 2022 at 11:12:20AM +0200, Oleg Nesterov wrote:
> >> This is clear, but it seems you do not understand me. Let me try again
> >> to explain and please correct me if I am wrong.
> >> 
> >> To simplify, let's suppose we have a single-thread task T which simply
> >> does
> >> 	__set_current_state(TASK_KILLABLE);
> >> 	schedule();
> >> 
> >> in the do_exit() paths after exit_signals(), which sets PF_EXITING. Btw,
> >> note that this even documents that the thread is no longer "visible" to
> >> group-wide signals; see below.
> >> 
> >> Now, suppose that this task is running and you send SIGKILL. T will
> >> dequeue SIGKILL from T->pending and call do_exit(). However, it won't
> >> remove SIGKILL from T->signal->shared_pending, and this means that
> >> signal_pending(T) is still true.
> >> 
> >> Now. If we add a PF_EXITING or sigismember(shared_pending, SIGKILL) check
> >> into __fatal_signal_pending(), then yes, T won't block in schedule(),
> >> schedule()->signal_pending_state() will return true.
> >> 
> >> But what if T exits on its own? It will block in schedule() forever:
> >> schedule()->signal_pending_state() will not even reach the
> >> __fatal_signal_pending() check, because signal_pending() == F.
> >> 
> >> Now if you send SIGKILL to this task, SIGKILL won't wake it up or even
> >> set TIF_SIGPENDING; complete_signal() will do nothing.
> >> 
> >> See?
> >> 
> >> I agree, we should probably clean up this logic and define how exactly
> >> the exiting task should react to signals (not only fatal signals). But
> >> your patch certainly doesn't look good to me and it is not enough.
> >> Maybe we can change get_signal() to not remove SIGKILL from t->pending
> >> for a start... not sure, this needs another discussion.
> >
> > Thank you for this! Between that and Eric's line about:
> >
> >> Frankly, that there are some leftover SIGKILL bits in the pending mask
> >> is a misfeature, and it is definitely not something you should count on.
> >
> > I think I finally maybe understand the objections.
> >
> > Is it fair to say that a task with PF_EXITING should never wait? I'm
> > wondering if a solution would be to patch the wait code to look for
> > PF_EXITING, in addition to checking the signal state.
> 
> That will at a minimum change zap_pid_ns_processes to busy wait
> instead of sleeping while it waits for children to die.
> 
> So we would need to survey the waits that can happen when closing file
> descriptors, and any other place on the exit path, to see how much impact
> such a change would have.

Oh, yes, of course.

> It might be possible to allow an extra SIGKILL to terminate such waits.
> We do something like that for coredumps.  But that is incredibly subtle
> and a pain to maintain, so I want to avoid that if we can.

Yeah, it feels better to clean up these waits. If we thought we got
them all, we could maybe even stick a WARN() in the wait code.
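
Roughly what I have in mind, just as an untested sketch (whether
signal_pending_state() is even the right place for the check is an
open question on my end):

	static inline int signal_pending_state(unsigned int state,
					       struct task_struct *p)
	{
		if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
			return 0;

		/*
		 * Once the exit-path waits are cleaned up, a task that
		 * still reaches a killable/interruptible sleep after
		 * setting PF_EXITING is worth flagging.
		 */
		WARN_ON_ONCE(p->flags & PF_EXITING);

		if (!signal_pending(p))
			return 0;

		return (state & TASK_INTERRUPTIBLE) ||
		       __fatal_signal_pending(p);
	}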

> >> Finally, if fuse_flush() wants __fatal_signal_pending() == T when the
> >> caller exits, perhaps it can do it itself? Something like
> >> 
> >> 	if (current->flags & PF_EXITING) {
> >> 		spin_lock_irq(&current->sighand->siglock);
> >> 		set_thread_flag(TIF_SIGPENDING);
> >> 		sigaddset(&current->pending.signal, SIGKILL);
> >> 		spin_unlock_irq(&current->sighand->siglock);
> >> 	}
> >> 
> >> Sure, this is ugly as hell. But perhaps this can serve as a workaround?
> >
> > or even just
> >
> >     if (current->flags & PF_EXITING)
> >         return 0;
> >
> > since we don't have anyone to send the result of the flush to anyway.
> > If we don't end up converging on a fix here, I'll just send that
> > patch. Thanks for the suggestion.
> 
> If that were limited to the case you care about, it would be reasonable.
> 
> That will have an effect any time a process that opens files on a
> fuse filesystem exits and depends upon the exit path to close its file
> descriptors to the fuse filesystem.
> 
> 
> I do see a plausible solution along those lines.
> 
> In fuse_flush, instead of using fuse_simple_request, call an equivalent
> function that, when PF_EXITING is true, skips calling request_wait_answer.
> Or perhaps, when PF_EXITING is set, use schedule_work to call
> request_wait_answer.

I don't see why this is any different from what I proposed. It changes
the semantics so that the flush happens out of order with task exit,
instead of strictly before it, which you point out might be a problem.
What am I missing?
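
For concreteness, the early return I mentioned above would look roughly
like this (untested; exactly where in fuse_flush() it belongs is still
up for discussion):

	static int fuse_flush(struct file *file, fl_owner_t id)
	{
		/*
		 * The task is exiting: there is nobody left to return
		 * the result of the flush to, so skip sending the FLUSH
		 * request and waiting on the fuse server entirely.
		 */
		if (current->flags & PF_EXITING)
			return 0;

		/* ... rest of fuse_flush() unchanged ... */
	}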

Tycho
