Message-ID: <ZIaBQnCKJ6NsqGhd@dread.disaster.area>
Date: Mon, 12 Jun 2023 12:21:54 +1000
From: Dave Chinner <david@...morbit.com>
To: "Darrick J. Wong" <djwong@...nel.org>
Cc: Zorro Lang <zlang@...hat.com>, linux-xfs@...r.kernel.org,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Mike Christie <michael.christie@...cle.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: [6.4-rc5 regression] core dump hangs (was Re: [Bug report] fstests
generic/051 (on xfs) hang on latest linux v6.4-rc5+)
On Sun, Jun 11, 2023 at 06:51:45PM -0700, Darrick J. Wong wrote:
> On Mon, Jun 12, 2023 at 09:01:19AM +1000, Dave Chinner wrote:
> > On Sun, Jun 11, 2023 at 08:48:36PM +0800, Zorro Lang wrote:
> > > Hi,
> > >
> > > When I tried to do fstests regression testing this weekend on the latest
> > > linux v6.4-rc5+ (HEAD=64569520920a3ca5d456ddd9f4f95fc6ea9b8b45), nearly all
> > > testing jobs on xfs hang on generic/051 (more than 24 hours, still blocked).
> > > No matter whether 1k or 4k block size, regular disk or pmem device, any
> > > architecture, or any mkfs/mount options, they all hang there.
> >
> > Yup, I started seeing this on upgrade to 6.4-rc5, too. xfs/079
> > generates it, because the fsstress process is crashing when the
> > XFS filesystem shuts down (maybe a SIGBUS from an mmap page fault?)
> > I don't know how reproducible it is yet; these only showed up in my
> > Thursday night testing, so I haven't had a chance to triage it yet.
> >
> > > Some console log is below (a bit long). The call trace doesn't contain any
> > > xfs functions, so it might not be an xfs bug, but it can't be reproduced on ext4.
> >
> > AFAICT, the coredump is being done on the root drive (where fsstress
> > is being executed from), not the XFS test/scratch devices that
> > fsstress processes are exercising. I have ext3 root drives for my
> > test machines, so at this point I'm not sure that this is even a
> > filesystem related regression. i.e. it may be a recent regression in
> > the coredump or signal handling code....
>
> Willy was complaining about the same thing on Friday. Oddly I've not
> seen any problems with coredumps on 6.4-rc5, so .... <shrug>
A quick check against xfs/079 (the case I was seeing) indicates that
it is a regression between v6.4-rc4 and v6.4-rc5.
Looking at the kernel/ changelog, I'm expecting that it is a
regression introduced by this commit:
commit f9010dbdce911ee1f1af1398a24b1f9f992e0080
Author: Mike Christie <michael.christie@...cle.com>
Date: Thu Jun 1 13:32:32 2023 -0500
fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
When switching from kthreads to vhost_tasks two bugs were added:
1. The vhost worker tasks now show up as processes, so scripts doing
   ps or ps a would now incorrectly detect the vhost task as another
   process.
2. kthreads disabled freezing by setting PF_NOFREEZE, but vhost
   tasks didn't disable it or add support for it.
To fix both bugs, this switches the vhost task to be a thread in the
process that does the VHOST_SET_OWNER ioctl, and has vhost_worker call
get_signal to support SIGKILL/SIGSTOP and freeze signals. Note that
SIGKILL/STOP support is required because CLONE_THREAD requires
CLONE_SIGHAND which requires those 2 signals to be supported.
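
Annotating as I read: the thread creation that implements this looks
roughly like the following. This is a from-memory sketch, not the
exact code in kernel/vhost_task.c, so treat the helper name, the flag
set and the kernel_clone_args fields as illustrative:

static struct task_struct *create_user_worker(int (*fn)(void *), void *arg)
{
	/*
	 * CLONE_THREAD pulls the worker into the owning process's
	 * thread group, and CLONE_THREAD requires CLONE_SIGHAND
	 * (which in turn requires CLONE_VM), so the worker shares the
	 * process's signal handling and must now cope with
	 * SIGKILL/SIGSTOP itself.
	 */
	struct kernel_clone_args args = {
		.flags		= CLONE_FS | CLONE_UNTRACED | CLONE_VM |
				  CLONE_THREAD | CLONE_SIGHAND,
		.exit_signal	= 0,
		.fn		= fn,
		.fn_arg		= arg,
		.user_worker	= 1,	/* task gets PF_USER_WORKER set */
	};

	return copy_process(NULL, 0, NUMA_NO_NODE, &args);
}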
This is a modified version of the patch written by Mike Christie
<michael.christie@...cle.com> which was a modified version of patch
originally written by Linus.
Much of what depended upon PF_IO_WORKER now depends on PF_USER_WORKER,
including ignoring signals, setting up the register state, and having
get_signal return instead of calling do_group_exit.
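
The get_signal() side of that is roughly the following (a paraphrased
fragment of the fatal signal path in kernel/signal.c, from memory, so
the exact placement may differ):

	/* In get_signal()'s fatal signal path: */
	if (current->flags & PF_USER_WORKER) {
		/*
		 * User workers catch fatal signals and run their own
		 * cleanup, so return the signal to the caller instead
		 * of exiting the whole thread group on their behalf.
		 */
		goto out;
	}

	/* Death signals, no core dump. */
	do_group_exit(ksig->info.si_signo);
	/* NOTREACHED */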
Tidied up the vhost_task abstraction so that the definition of
vhost_task only needs to be visible inside of vhost_task.c, making
it easier to review the code and tell what needs to be done where.
As part of this the main loop has been moved from vhost_worker into
vhost_task_fn. vhost_worker now returns true if work was done.
The main loop has been updated to call get_signal which handles
SIGSTOP, freezing, and collects the message that tells the thread to
exit as part of process exit. This collection clears
__fatal_signal_pending. This collection is not guaranteed to
clear signal_pending(), so clear that explicitly so that the
schedule() call sleeps.
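
Put together, that gives the worker loop roughly this shape -- a
simplified sketch of vhost_task_fn(), not the verbatim code:

static int vhost_task_fn(void *data)
{
	struct vhost_task *vtsk = data;
	bool dead = false;

	for (;;) {
		bool did_work;

		if (!dead && signal_pending(current)) {
			struct ksignal ksig;

			/*
			 * get_signal() handles SIGSTOP and the freezer,
			 * and returns true once the fatal "exit" signal
			 * has been collected.  Collecting it clears
			 * __fatal_signal_pending but not necessarily
			 * signal_pending(), so TIF_SIGPENDING must be
			 * cleared explicitly or the schedule() below
			 * would never sleep.
			 */
			dead = get_signal(&ksig);
			if (dead)
				clear_thread_flag(TIF_SIGPENDING);
		}

		/*
		 * Keep servicing queued work even after the exit
		 * signal was collected; the shutdown path (driven by
		 * the last file descriptor close) is left out of this
		 * sketch.
		 */
		did_work = vtsk->fn(vtsk->data);
		if (!did_work) {
			set_current_state(TASK_INTERRUPTIBLE);
			schedule();
		}
	}
}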
For now the vhost thread continues to exist and run work until the
last file descriptor is closed and the release function is called as
part of freeing struct file. To avoid hangs in the coredump
rendezvous and when killing threads in a multi-threaded exec, the
coredump code and de_thread have been modified to ignore vhost threads.
Removing the special case for exec appears to require teaching
vhost_dev_flush how to directly complete transactions in case
the vhost thread is no longer running.
Removing the special case for coredump rendezvous requires either the
above fix needed for exec or moving the coredump rendezvous into
get_signal.
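
For reference, the "ignore vhost threads" change on the coredump side
is roughly the following -- a from-memory sketch of the thread-zapping
loop in fs/coredump.c, not the verbatim diff, and the exact flag test
is the part to double-check:

	for_each_thread(start, t) {
		task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
		if (t != current && !(t->flags & PF_POSTCOREDUMP)) {
			sigaddset(&t->pending.signal, SIGKILL);
			signal_wake_up(t, 1);
			/*
			 * vhost workers (PF_USER_WORKER without
			 * PF_IO_WORKER) don't participate in the
			 * coredump rendezvous, so don't count them in
			 * the number of threads the dumper waits for.
			 */
			if ((t->flags & (PF_USER_WORKER | PF_IO_WORKER)) !=
			    PF_USER_WORKER)
				nr++;
		}
	}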
Fixes: 6e890c5d5021 ("vhost: use vhost_tasks for worker threads")
Signed-off-by: Eric W. Biederman <ebiederm@...ssion.com>
Co-developed-by: Mike Christie <michael.christie@...cle.com>
Signed-off-by: Mike Christie <michael.christie@...cle.com>
Acked-by: Michael S. Tsirkin <mst@...hat.com>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
I'm just building a v6.4-rc5 with this one commit reverted to test
it right now.....
<a few minutes later>
Yup, 6.4-rc5 with that patch reverted doesn't hang at all. That's
the problem.
I guess the regression fix needs a regression fix....
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com