Message-ID: <20250416-tonlage-gesund-160868ceccc1@brauner>
Date: Wed, 16 Apr 2025 21:47:34 +0200
From: Christian Brauner <brauner@...nel.org>
To: Nathan Chancellor <nathan@...nel.org>
Cc: Oleg Nesterov <oleg@...hat.com>, linux-fsdevel@...r.kernel.org,
Jeff Layton <jlayton@...nel.org>, Lennart Poettering <lennart@...ttering.net>,
Daan De Meyer <daan.j.demeyer@...il.com>, Mike Yuan <me@...dnzj.com>, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v2 0/2] pidfs: ensure consistent ENOENT/ESRCH reporting
On Wed, Apr 16, 2025 at 03:55:48PM +0200, Christian Brauner wrote:
> On Tue, Apr 15, 2025 at 03:34:54PM -0700, Nathan Chancellor wrote:
> > Hi Christian,
> >
> > On Fri, Apr 11, 2025 at 03:22:43PM +0200, Christian Brauner wrote:
> > > In a prior patch series we tried to cleanly differentiate between:
> > >
> > > (1) The task has already been reaped.
> > > (2) The caller requested a pidfd for a thread-group leader but the pid
> > > actually references a struct pid that isn't used as a thread-group
> > > leader.
> > >
> > > as this was causing issues for non-threaded workloads.
> > >
> > > But there are cases where the current simple logic is wrong. Specifically,
> > > if the pid was a leader pid and the check races with __unhash_process().
> > > Stabilize this by using the pidfd waitqueue lock.
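[For readers skimming the thread, a rough sketch of the pattern the cover
letter describes: classify "already reaped" vs. "not a thread-group leader"
while holding the pidfd waitqueue lock, so the result cannot race with
__unhash_process(). The shape follows the hunk quoted further down; the
second check and its error value are assumptions based on the subject line,
not the literal upstream code.]

	/*
	 * Sketch only: hold the pidfd waitqueue lock so the answer cannot
	 * change underneath us while __unhash_process() detaches the task.
	 */
	scoped_guard(spinlock_irq, &pid->wait_pidfd.lock) {
		/* (1) No task attached to this struct pid anymore: already reaped. */
		if (!pid_has_task(pid, PIDTYPE_PID))
			return -ESRCH;

		/*
		 * (2) The caller wants a thread-group pidfd, but this struct pid
		 * is not used as a thread-group leader. Error value assumed from
		 * the "ENOENT/ESRCH" subject line.
		 */
		if (!(flags & PIDFD_THREAD) && !pid_has_task(pid, PIDTYPE_TGID))
			return -ENOENT;
	}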
> >
> > After the recent work in vfs-6.16.pidfs (I tested at
> > a9d7de0f68b79e5e481967fc605698915a37ac13), I am seeing issues with using
> > 'machinectl shell' to connect to a systemd-nspawn container on one of my
> > machines running Fedora 41 (the container is using Rawhide).
> >
> > $ machinectl shell -q nathan@...V_IMG $SHELL -l
> > Failed to get shell PTY: Connection timed out
> >
> > My initial bisect attempt landed on the merge of the first series
> > (1e940fff9437), which does not make much sense because 4fc3f73c16d was
> > allegedly good in my test, but I did not investigate that too hard
> > since I have lost enough time on this as it is, heh. It never
> > reproduces at 6.15-rc1 and consistently reproduces at a9d7de0f68b, so
> > I figured I would report it here since you mention this series is a
> > fix for the first one. If there is any other information I can provide
> > or patches I can test (either as fixes or for debugging), I am more
> > than happy to do so.
I can't reproduce this issue at all with vfs-6.16.pidfs unfortunately.
>
> Does the following patch make a difference for you?:
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index f7403e1fb0d4..dd30f7e09917 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -2118,7 +2118,7 @@ int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret)
> scoped_guard(spinlock_irq, &pid->wait_pidfd.lock) {
> /* Task has already been reaped. */
> if (!pid_has_task(pid, PIDTYPE_PID))
> - return -ESRCH;
> + return -EINVAL;
> /*
> * If this struct pid isn't used as a thread-group
> * leader but the caller requested to create a
>
> If it did, it would be weird if the first merge is indeed marked as good.
> What if you used a non-Rawhide version of systemd? This might also be a
> regression on their side.
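[For what it's worth, the errno switch in the test patch above is exactly
the kind of detail a userspace caller can key off. A minimal, hypothetical
illustration (not systemd's actual code) of a caller that treats the two
values differently:]

	/* Hypothetical illustration only; not systemd's actual handling. */
	#include <errno.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int open_pidfd(pid_t pid)
	{
		int fd = syscall(SYS_pidfd_open, pid, 0);

		if (fd < 0) {
			if (errno == ESRCH)
				fprintf(stderr, "pid %d: already reaped\n", (int)pid);
			else if (errno == EINVAL)
				fprintf(stderr, "pid %d: rejected as invalid\n", (int)pid);
			else
				perror("pidfd_open");
		}
		return fd;
	}

[A caller that retries on ESRCH but treats EINVAL as a hard error, or vice
versa, would behave differently depending on which value the kernel reports,
which is what the patch above is meant to probe.]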