Message-ID: <BANLkTinF7AUUAzWz2jGGXPiJNb23rHyPFQ@mail.gmail.com>
Date: Mon, 30 May 2011 20:11:21 +0200
From: Denys Vlasenko <vda.linux@...glemail.com>
To: Tejun Heo <tj@...nel.org>
Cc: jan.kratochvil@...hat.com, oleg@...hat.com,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, indan@....nu
Subject: Re: execve-under-ptrace API bug (was Re: Ptrace documentation, draft #3)
> Ok, let's take a deeper look at API needs. What do we need to report, and when?
>
> We have three kinds of threads at execve:
> 1. the execve'ing thread,
> 2. the leader - two cases: (2a) the leader is still alive, (2b) the leader has exited by now,
> 3. other threads.
>
> (3) is the simplest: the API should report the death of these threads.
> There is no need to ensure these death notifications are delivered
> before the execve syscall exit is reported. They can be consumed
> by the tracer later.
>
> (1) The execve'ing thread is obviously alive, and the current kernel
> already reports its execve success. The only thing we need to add is
> a way to retrieve its former pid, so that the tracer can drop the
> former pid's data, and also to cater for the "two execve's" case.
> PTRACE_EVENT_EXEC seems to be a good place to do it.
> Say, using GETEVENTMSG?
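(For concreteness, the tracer side of that idea would look roughly like
this. A sketch only: GETEVENTMSG returning the former pid at
PTRACE_EVENT_EXEC is what is being proposed above, not something the
current kernel is guaranteed to do.)

#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

static void on_stop(pid_t pid, int status)
{
        unsigned long former_pid = 0;

        if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP
            && (status >> 16) == PTRACE_EVENT_EXEC) {
                /* proposed: event message holds the execve'ing
                   thread's former pid */
                if (ptrace(PTRACE_GETEVENTMSG, pid, 0, &former_pid) == 0
                    && former_pid != (unsigned long)pid) {
                        /* here the tracer would drop whatever state
                           it kept under the old pid */
                        printf("%d: exec'ed, former pid %lu\n",
                               pid, former_pid);
                }
        }
        ptrace(PTRACE_CONT, pid, 0, 0);
}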
>
> (2) is the most problematic. If the leader is still alive, should
> we report its death? Doing so makes sense: if we do,
> and if we ensure its death is always reported before
> PTRACE_EVENT_EXEC, then the rule is pretty simple:
> at PTRACE_EVENT_EXEC, the leader has always already been reported dead.
>
> However, I don't see why we _must_ do it this way.
> The tracer's life is not that much worse if, at
> PTRACE_EVENT_EXEC, a leader which is still alive
> is simply "supplanted" by the execve'ed process.
>
> We definitely must ensure, though, that if the leader races with the
> execve'ing thread and enters exit(2), its death is never reported
> *after* PTRACE_EVENT_EXEC - that would confuse the tracer for sure!
> A process which has exited but is still alive?! Not good!

FWIW, here is the current behavior (2.6.38.6-27.fc15.i686.PAE).
The test program creates two threads and execve's from the last thread.
PTRACE_O_TRACECLONE | PTRACE_O_TRACEEXIT | PTRACE_O_TRACEEXEC
is requested by the tracer.
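Roughly, the tracer side does something like this (a simplified sketch,
not the attached threaded-execve.c verbatim):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

static void trace_all(pid_t leader)
{
        int status;
        pid_t pid;

        /* leader has already been attached and is stopped here */
        ptrace(PTRACE_SETOPTIONS, leader, 0,
               PTRACE_O_TRACECLONE | PTRACE_O_TRACEEXIT | PTRACE_O_TRACEEXEC);
        ptrace(PTRACE_CONT, leader, 0, 0);

        /* __WALL: also wait for clone children */
        while ((pid = waitpid(-1, &status, __WALL)) > 0) {
                if (WIFEXITED(status)) {
                        printf("%d: WIFEXITED exitcode:%d\n",
                               pid, WEXITSTATUS(status));
                        continue;
                }
                printf("%d: status:%08x sig:%d event:%d\n",
                       pid, (unsigned)status, WSTOPSIG(status), status >> 16);
                ptrace(PTRACE_CONT, pid, 0, 0);
        }
}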
I compiled the attached program with gcc -Wall threaded-execve.c,
ran it, and I see this:
6797: thread leader
6797: status:0003057f WIFSTOPPED sig:5 (TRAP) event:CLONE
6798: status:0000137f WIFSTOPPED sig:19 (STOP) event:(null)
6797: status:0003057f WIFSTOPPED sig:5 (TRAP) event:CLONE
6799: status:0000137f WIFSTOPPED sig:19 (STOP) event:(null)
6798: status:0006057f WIFSTOPPED sig:5 (TRAP) event:EXIT
6797: status:0006057f WIFSTOPPED sig:5 (TRAP) event:EXIT
6798: status:00000000 WIFEXITED exitcode:0
6797: status:0004057f WIFSTOPPED sig:5 (TRAP) event:EXEC
6797: status:0003057f WIFSTOPPED sig:5 (TRAP) event:CLONE
6800: status:0000137f WIFSTOPPED sig:19 (STOP) event:(null)
6797: status:0003057f WIFSTOPPED sig:5 (TRAP) event:CLONE
6801: status:0000137f WIFSTOPPED sig:19 (STOP) event:(null)
6800: status:0006057f WIFSTOPPED sig:5 (TRAP) event:EXIT
6797: status:0006057f WIFSTOPPED sig:5 (TRAP) event:EXIT
6800: status:00000000 WIFEXITED exitcode:0
6797: status:0004057f WIFSTOPPED sig:5 (TRAP) event:EXEC
...
...
...
In short, it doesn't look too bad: we do get EXIT events for both
destroyed threads, and even get WIFEXITED for the non-leader.
(IOW, maybe PTRACE_O_TRACEEXIT is not even needed!)
The EXEC event is reported last (also good!).
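(To spell out what those status words mean - a tiny sanity check, not
part of the test program:)

#include <assert.h>
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

int main(void)
{
        int s = 0x0004057f;                     /* the "event:EXEC" lines */

        assert(WIFSTOPPED(s) && WSTOPSIG(s) == SIGTRAP);
        assert((s >> 16) == PTRACE_EVENT_EXEC); /* 4; CLONE is 3, EXIT is 6 */
        return 0;
}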
Oleg, does this look like it works as intended, or am I just lucky?
I guess I need to test with a larger number of threads and throw in some races...
--
vda