Message-ID: <ZrlRggozUT6dJRh+@dread.disaster.area>
Date: Mon, 12 Aug 2024 10:04:18 +1000
From: Dave Chinner <david@...morbit.com>
To: Anders Blomdell <anders.blomdell@...il.com>
Cc: linux-xfs@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Chandan Babu R <chandan.babu@...cle.com>,
"Darrick J. Wong" <djwong@...nel.org>,
Christoph Hellwig <hch@....de>
Subject: Re: XFS mount timeout in linux-6.9.11
On Sun, Aug 11, 2024 at 10:17:50AM +0200, Anders Blomdell wrote:
> On 2024-08-11 01:11, Dave Chinner wrote:
> > On Sat, Aug 10, 2024 at 10:29:38AM +0200, Anders Blomdell wrote:
> > > On 2024-08-10 00:55, Dave Chinner wrote:
> > > > On Fri, Aug 09, 2024 at 07:08:41PM +0200, Anders Blomdell wrote:
> > > echo $(uname -r) $(date +%H:%M:%S) > /dev/kmsg
> > > mount /dev/vg1/test /test
> > > echo $(uname -r) $(date +%H:%M:%S) > /dev/kmsg
> > > umount /test
> > > echo $(uname -r) $(date +%H:%M:%S) > /dev/kmsg
> > > mount /dev/vg1/test /test
> > > echo $(uname -r) $(date +%H:%M:%S) > /dev/kmsg
> > >
> > > [55581.470484] 6.8.0-rc4-00129-g14dd46cf31f4 09:17:20
> > > [55581.492733] XFS (dm-7): Mounting V5 Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [56048.292804] XFS (dm-7): Ending clean mount
> > > [56516.433008] 6.8.0-rc4-00129-g14dd46cf31f4 09:32:55
> >
> > So it took ~450s to determine that the mount was clean, then another
> > 450s to return to userspace?
> Yeah, that aligns with my userspace view that the mount takes 15 minutes.
> >
> > > [56516.434695] XFS (dm-7): Unmounting Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [56516.925145] 6.8.0-rc4-00129-g14dd46cf31f4 09:32:56
> > > [56517.039873] XFS (dm-7): Mounting V5 Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [56986.017144] XFS (dm-7): Ending clean mount
> > > [57454.876371] 6.8.0-rc4-00129-g14dd46cf31f4 09:48:34
> >
> > Same again.
> >
> > Can you post the 'xfs_info /mnt/pt' for that filesystem?
> # uname -r ; xfs_info /test
> 6.8.0-rc4-00128-g8541a7d9da2d
> meta-data=/dev/mapper/vg1-test isize=512 agcount=8, agsize=268435455 blks
> = sectsz=4096 attr=2, projid32bit=1
> = crc=1 finobt=1, sparse=0, rmapbt=0
> = reflink=1 bigtime=0 inobtcount=0 nrext64=0
> data = bsize=4096 blocks=2147483640, imaxpct=20
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0, ftype=1
> log =internal log bsize=4096 blocks=521728, version=2
> = sectsz=4096 sunit=1 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
Ok, nothing I'd consider strange there.
> > > And rebooting to the kernel before the offending commit:
> > >
> > > [ 60.177951] 6.8.0-rc4-00128-g8541a7d9da2d 10:23:00
> > > [ 61.009283] SGI XFS with ACLs, security attributes, realtime, scrub, quota, no debug enabled
> > > [ 61.017422] XFS (dm-7): Mounting V5 Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [ 61.351100] XFS (dm-7): Ending clean mount
> > > [ 61.366359] 6.8.0-rc4-00128-g8541a7d9da2d 10:23:01
> > > [ 61.367673] XFS (dm-7): Unmounting Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [ 61.444552] 6.8.0-rc4-00128-g8541a7d9da2d 10:23:01
> > > [ 61.459358] XFS (dm-7): Mounting V5 Filesystem e2159bbc-18fb-4d4b-a6c5-14c97b8e5380
> > > [ 61.513938] XFS (dm-7): Ending clean mount
> > > [ 61.524056] 6.8.0-rc4-00128-g8541a7d9da2d 10:23:01
> >
> > Yeah, that's what I'd expect to see.
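For concreteness, the dmesg timestamps quoted above can be subtracted directly to quantify the difference between the two kernels. This is just a quick arithmetic sketch; all values are copied verbatim from the logs in this thread (dmesg timestamps are seconds since boot):

```python
# Slow kernel (6.8.0-rc4-00129-g14dd46cf31f4), timestamps from the logs above:
slow_mount_start = 55581.492733       # "Mounting V5 Filesystem"
slow_mount_clean = 56048.292804       # "Ending clean mount"
slow_back_in_userspace = 56516.433008 # next echo, i.e. mount(8) returned

# Fast kernel (6.8.0-rc4-00128-g8541a7d9da2d), same sequence:
fast_mount_start = 61.017422
fast_mount_clean = 61.351100
fast_back_in_userspace = 61.366359

print(f"slow: clean mount took {slow_mount_clean - slow_mount_start:.0f}s, "
      f"return to userspace another {slow_back_in_userspace - slow_mount_clean:.0f}s")
print(f"fast: clean mount took {fast_mount_clean - fast_mount_start:.2f}s, "
      f"return to userspace another {fast_back_in_userspace - fast_mount_clean:.2f}s")
```

That is roughly 467s to reach "Ending clean mount" and another 468s before the mount returns, versus about a third of a second total on the kernel before the offending commit.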
Ok, can you run the same series of commands again, but this time, in
another shell, run this command and leave it running for the entire
mount/unmount/mount/unmount sequence:
# trace-cmd record -e xfs\* -e printk
Then ctrl-c out of it and run:
# trace-cmd report > xfs-mount-report.<kernel>.txt
Do this on both kernels and send me the generated output (or a link
I can download it from, because it will probably be quite large even
when compressed).
That will tell me what XFS is doing differently at mount time on the
two kernels.
[snip stuff about git bisect]
I'll come back to the bisect if it's relevant once I know what XFS
is doing differently across the unmount/mount cycles on the two
different kernels.
-Dave.
--
Dave Chinner
david@...morbit.com