[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20111125165409.GA19238@redhat.com>
Date: Fri, 25 Nov 2011 17:54:09 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Pavel Emelyanov <xemul@...allels.com>
Cc: Tejun Heo <tj@...nel.org>, Pedro Alves <pedro@...esourcery.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Cyrill Gorcunov <gorcunov@...nvz.org>,
James Bottomley <jbottomley@...allels.com>
Subject: Re: [RFC][PATCH 0/3] fork: Add the ability to create tasks with
given pids
On 11/25, Pavel Emelyanov wrote:
>
> On 11/25/2011 08:22 PM, Oleg Nesterov wrote:
> > On 11/25, Pavel Emelyanov wrote:
> >>
> >> The proposal is to implement the PR_RESERVE_PID prctl which allocates and puts a
> >> pid on the current. The subsequent fork() uses this pid,
> >
> > Oh. This is subjective, yes, but this doesn't clean to me.
> >
> > Amd why?? On the running system PR_RESERVE_PID can obviously fail anyway.
> > It only helps to avoid the race with another fork.
>
> No. It can fail if you try to allocate a pid with given number. The API allows for
> pid generation. AFAIU this can help with Pedro's requirements to resurrect task with
> the same pid value it used to have before.
Yes gdb can do fork() in a row (until it unreserves the pid) and the
pid will be the same.
OK, I misunderstood. I thought you insist that PR_RESERVE_PID itself
is reliable.
But this can only work in the simplest case. How you can restore the
multithread tracee? You need to unreserve/reserve the previous pid,
and we have the same problems again, no?
> > Yes, and this task_struct->rsv_pid acts as implicit parameter for the
> > next clone(). Doesn't look very nice to me. Plus the code complications.
>
> Well, the last_pid is also an implicit parameter for the next clone() with sysctl
> approach :)
Yes. but it is already here ;)
Oleg.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists