Date:   Fri, 15 Apr 2022 14:00:15 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Oleg Nesterov <oleg@...hat.com>
Cc:     rjw@...ysocki.net, mingo@...nel.org, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
        ebiederm@...ssion.com, bigeasy@...utronix.de,
        Will Deacon <will@...nel.org>, linux-kernel@...r.kernel.org,
        tj@...nel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH 2/5] sched,ptrace: Fix ptrace_check_attach() vs PREEMPT_RT

On Fri, Apr 15, 2022 at 12:16:44PM +0200, Oleg Nesterov wrote:
> On 04/15, Peter Zijlstra wrote:
> >
> > On Thu, Apr 14, 2022 at 08:34:33PM +0200, Oleg Nesterov wrote:
> >
> > > If it can work, then 1/5 needs some changes, I think. In particular,
> > > it should not introduce JOBCTL_TRACED_FROZEN until 5/5, and perhaps
> >
> > That TRACED_FROZEN was to distinguish the TASK_TRACED and __TASK_TRACED
> > state, and isn't related to the freezer.
> 
> Let's forget about 3-5, which I haven't read carefully yet. So why do
> we need TRACED_FROZEN?

The purpose of 1/5 was to not have any unique state in __state; to be
able, at all times, to reconstruct __state from outside information
(where needed).

Agreed that this particular piece of state isn't needed until 5/5, but
the concept is independent (also 5/5 is insanely large already).
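
That is, with the jobctl bits from 1/5 the stop state is always
recoverable from outside of __state. As an illustration only (this
helper is hypothetical, it is not part of the posted series):

	/*
	 * Hypothetical helper, for illustration only: with the 1/5 jobctl
	 * bits nothing in __state is unique, the stop state can be
	 * recomputed from jobctl alone, even while ptrace_freeze_traced()
	 * has overwritten __state with __TASK_TRACED.
	 */
	static unsigned int jobctl_stop_state(struct task_struct *task)
	{
		if (task->jobctl & JOBCTL_TRACED)
			return TASK_TRACED;
		if (task->jobctl & JOBCTL_STOPPED)
			return TASK_STOPPED;
		return TASK_RUNNING;
	}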

> From 1/5:
> 
> 	 static inline void signal_wake_up(struct task_struct *t, bool resume)
> 	 {
> 	+	lockdep_assert_held(&t->sighand->siglock);
> 	+
> 	+	if (resume && !(t->jobctl & JOBCTL_TRACED_FROZEN))
> 	+		t->jobctl &= ~(JOBCTL_STOPPED | JOBCTL_TRACED);
> 	+
> 		signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
> 	 }
> 	+
> 	 static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
> 	 {
> 	+	lockdep_assert_held(&t->sighand->siglock);
> 	+
> 	+	if (resume)
> 	+		t->jobctl &= ~JOBCTL_TRACED;
> 	+
> 		signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
> 	 }
> 
> Can't we simply change signal_wake_up_state(),
> 
> 	void signal_wake_up_state(struct task_struct *t, unsigned int state)
> 	{
> 		set_tsk_thread_flag(t, TIF_SIGPENDING);
> 		/*
> 		 * TASK_WAKEKILL also means wake it up in the stopped/traced/killable
> 		 * case. We don't check t->state here because there is a race with it
> 		 * executing another processor and just now entering stopped state.
> 		 * By using wake_up_state, we ensure the process will wake up and
> 		 * handle its death signal.
> 		 */
> 		if (wake_up_state(t, state | TASK_INTERRUPTIBLE))
> 			t->jobctl &= ~(JOBCTL_STOPPED | JOBCTL_TRACED);
> 		else
> 			kick_process(t);
> 	}
> 
> ?

This would be broken when we do signal_wake_up_state() with a state that
doesn't match. Does that happen? I'm thinking siglock protects us from
the most obvious races, but still.

If it is not broken, then it needs at least a comment explaining why
not, etc.; I'm sure I won't remember many of these details.

Also, signal_wake_up_state() really can do with that
lockdep_assert_held() as well ;-)
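
That is, if the mismatched-state case turns out to be a non-issue,
something like your version plus the assert (completely untested
sketch):

	void signal_wake_up_state(struct task_struct *t, unsigned int state)
	{
		lockdep_assert_held(&t->sighand->siglock);

		set_tsk_thread_flag(t, TIF_SIGPENDING);
		/*
		 * TASK_WAKEKILL also means wake it up in the stopped/traced/
		 * killable case; we don't check t->__state here because there
		 * is a race with it executing on another processor and just
		 * now entering stopped state. If wake_up_state() reports an
		 * actual wakeup, the task is leaving the stop and the
		 * STOPPED/TRACED bits must go.
		 */
		if (wake_up_state(t, state | TASK_INTERRUPTIBLE))
			t->jobctl &= ~(JOBCTL_STOPPED | JOBCTL_TRACED);
		else
			kick_process(t);
	}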

> > > 		/*
> > > 		 * We take the read lock around doing both checks to close a
> > > 		 * possible race where someone else attaches or detaches our
> > > 		 * natural child.
> > > 		 */
> > > 		read_lock(&tasklist_lock);
> > > 		traced = child->ptrace && child->parent == current;
> > > 		read_unlock(&tasklist_lock);
> > >
> > > 		if (!traced)
> > > 			return -ESRCH;
> >
> > The thing being that if it is our ptrace child, it won't be going away
> > since we're running this code and not ptrace_detach(). Right?
> 
> Yes. And nobody else can detach it.
> 
> Another tracer can't attach until child->ptrace is cleared, but this can
> only happen if a) this child is killed and b) another thread does wait()
> and reaps it; and after that, attach() is obviously impossible.
> 
> But since this child can go away, the patch changes ptrace_freeze_traced()
> to use lock_task_sighand().

Right.
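
And the nice property of lock_task_sighand() is that it fails once the
child is dead and reaped; roughly (sketch, not quoting 2/5 verbatim):

	static bool ptrace_freeze_traced(struct task_struct *task)
	{
		unsigned long flags;
		bool ret = false;

		/* Returns NULL once the task is dead and has been reaped. */
		if (!lock_task_sighand(task, &flags))
			return ret;

		/* ... the __state / jobctl checks discussed below ... */

		unlock_task_sighand(task, &flags);
		return ret;
	}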

> > > 		for (;;) {
> > > 			if (fatal_signal_pending(current))
> > > 				return -EINTR;
> >
> > What if signal_wake_up(.resume=true) happens here? In that case we miss
> > the pending fatal signal, and the task state hasn't changed yet, so
> > we'll happily go to sleep.
> 
> No, it won't sleep, see the signal_pending_state() check in schedule().

Urgh, forgot about that one ;-)
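
For the archives, that check is roughly (from memory):

	static inline int signal_pending_state(unsigned int state,
					       struct task_struct *p)
	{
		if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
			return 0;
		if (!signal_pending(p))
			return 0;

		return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
	}

and TASK_KILLABLE contains TASK_WAKEKILL, so with a fatal signal pending
__schedule() flips the task back to TASK_RUNNING instead of dequeueing
it.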

> > > 			set_current_state(TASK_KILLABLE);
> 
> And let me explain TASK_KILLABLE just in case... We could just use
> TASK_UNINTERRUPTIBLE and avoid the signal_pending() check, but KILLABLE
> looks "safer" to me. If the tracer hangs because of some bug, at least
> it can be killed from userspace.

Agreed.

> 
> > > 			if (!(READ_ONCE(child->jobctl) & JOBCTL_TRACED)) {
> >
> >   TRACED_XXX ?
> 
> oops ;)
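
Right; so as I read it the 2/5 wait loop then ends up something like the
below (sketch; keeping the TRACED_XXX placeholder for whatever the bit
ends up being called):

	for (;;) {
		if (fatal_signal_pending(current))
			return -EINTR;

		set_current_state(TASK_KILLABLE);
		if (!(READ_ONCE(child->jobctl) & JOBCTL_TRACED_XXX)) {
			__set_current_state(TASK_RUNNING);
			break;
		}

		schedule();
	}
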
> 
> > > -	spin_lock_irq(&task->sighand->siglock);
> > >  	if (task_is_traced(task) && !looks_like_a_spurious_pid(task) &&
> > >  	    !__fatal_signal_pending(task)) {
> > >  		task->jobctl |= JOBCTL_TRACED_FROZEN;
> > >  		WRITE_ONCE(task->__state, __TASK_TRACED);
> > >  		ret = true;
> > >  	}
> >
> > I would feel much better if this were still a task_func_call()
> > validating !->on_rq && !->on_cpu.
> 
> Well, but "on_rq || on_cpu" would mean that wait_task_inactive() is buggy ?

Yes, but I'm starting to feel a little paranoid here. Better safe than
sorry, etc.

> But! I forgot to make another change in this code. I do not think it should
> rely on task_is_traced(). We are going to abuse task->__state, so I think
> it should check task->__state == TASK_TRACED directly. Say,
> 
> 	if (READ_ONCE(task->__state) == TASK_TRACED && ...) {
> 		WRITE_ONCE(task->__state, __TASK_TRACED);
> 		WARN_ON_ONCE(!task_is_traced(task));
> 		ret = true;
> 	}
> 
> looks cleaner to me. What do you think?

Agreed on this.
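
So, combining that with the earlier hunk, the check under siglock
becomes something like (sketch):

	if (READ_ONCE(task->__state) == TASK_TRACED &&
	    !looks_like_a_spurious_pid(task) &&
	    !__fatal_signal_pending(task)) {
		task->jobctl |= JOBCTL_TRACED_FROZEN;
		WRITE_ONCE(task->__state, __TASK_TRACED);
		WARN_ON_ONCE(!task_is_traced(task));
		ret = true;
	}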

> > > @@ -2307,13 +2313,14 @@ static int ptrace_stop(int exit_code, int why, int clear_code,
> > >  		 */
> > >  		if (gstop_done)
> > >  			do_notify_parent_cldstop(current, false, why);
> > > +		clear_traced_xxx();
> > > +		read_unlock(&tasklist_lock);
> > >
> > > -		/* tasklist protects us from ptrace_freeze_traced() */
> > > +		/* JOBCTL_TRACED_XXX protects us from ptrace_freeze_traced() */
> >
> > But... TRACED_XXX has just been cleared ?!
> 
> Cough ;) OK, I'll move __set_current_state() back under tasklist.
> 
> And in this case we do not need wake_up(parent), so we can shift it from
> clear_traced_xxx() into another branch.
> 
> OK, so far it seems that this patch needs a couple of simple fixes you
> pointed out, but before I send V2:
> 
> 	- do you agree we can avoid JOBCTL_TRACED_FROZEN in 1-2 ?

We can avoid TRACED_FROZEN for the sake of 2, but as explained at the
start, the point of 1 was to ensure there is no unique state in __state,
and in that respect I think we can keep it, hmm?

> 	- will you agree if I change ptrace_freeze_traced() to rely
> 	  on __state == TASK_TRACED rather than task_is_traced() ?

Yes.
