Message-ID: <AM6PR03MB5170660DA597EAAAC0AC1B5BE4DF0@AM6PR03MB5170.eurprd03.prod.outlook.com>
Date: Sat, 11 Apr 2020 21:15:11 +0200
From: Bernd Edlinger <bernd.edlinger@...mail.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Waiman Long <longman@...hat.com>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Alexey Gladkov <gladkov.alexey@...il.com>
Subject: Re: [GIT PULL] Please pull proc and exec work for 5.7-rc1
On 4/11/20 8:29 PM, Linus Torvalds wrote:
> On Sat, Apr 11, 2020 at 11:21 AM Oleg Nesterov <oleg@...hat.com> wrote:
>>
>> On 04/09, Linus Torvalds wrote:
>>>
>>> (1) have execve() not wait for dead threads while holding the cred
>>> mutex
>>
>> This is what I tried to do 3 years ago, see
>
> Well, you did it differently - by moving the "wait for dead threads"
> logic to after releasing the lock.
>
> My simpler patch was lazier - just don't wait for dead threads at all,
> since they are dead and not interesting.
But won't the dead thread's lifetime overlap the new thread's lifetime
from the tracer's POV?
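
(For context, here is my rough sketch of the status quo and of the
difference between the two approaches -- made-up helper name, this is
not quoting either patch:

	/*
	 * Status quo, simplified: de_thread() holds cred_guard_mutex
	 * while it waits for sig->notify_count to drop to zero, and
	 * notify_count is only decremented from release_task(), i.e.
	 * when a dead thread is actually reaped.  A ptraced zombie is
	 * not reaped until its tracer wait()s for it, so execve() can
	 * block here with the mutex held.
	 */
	mutex_lock(&sig->cred_guard_mutex);
	zap_other_threads(current);
	wait_for_notify_count();	/* made-up name for the wait loop */
	/* ... install new credentials and the new image ... */
	mutex_unlock(&sig->cred_guard_mutex);

The "lazy" variant would return from that wait as soon as the other
threads are dead, even if a ptraced one is still an unreaped
EXIT_ZOMBIE -- hence my question about the overlap.)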
Bernd.
>
> Because even if it's Easter weekend, those threads are not coming back
> to life ;)
>
> You do say in that old patch that we can't just share the signal
> state, but I wonder how true that is. Sharing it with a TASK_ZOMBIE
> doesn't seem all that problematic to me. The only thing that can do is
> getting reaped by a later wait.
>
> That said, I actually am starting to think that maybe execve() should
> just try to reap those threads instead, and avoid the whole issue that
> way. Basically my "option (2)" thing.
>
> Sure, that's basically stealing them from the parent, but 'execve()'
> really is special wrt threads, and the parent still has the execve()
> thread itself. And it's not so different from SIGKILL, which also
> forcibly breaks off any ptracer etc without anybody being able to say
> anything about it.
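If I read "option (2)" correctly, it would amount to something like
the sketch below inside de_thread().  EXIT_ZOMBIE, release_task() and
for_each_thread() are the real kernel symbols; the placement and the
(elided) locking are my guess, not a tested patch:

	struct task_struct *t;

	/*
	 * Reap dead sub-threads ourselves instead of leaving them
	 * for the parent/tracer.  release_task() unhashes the task
	 * and takes tasklist_lock itself, so making this iteration
	 * safe against concurrent exits is hand-waved here.
	 */
	for_each_thread(current, t) {
		if (t != current && t->exit_state == EXIT_ZOMBIE)
			release_task(t);
	}
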
>
> Linus
>