Message-ID:
<DU0PR04MB9496B72135BFB95A93DB577E90F8A@DU0PR04MB9496.eurprd04.prod.outlook.com>
Date: Fri, 31 Oct 2025 02:31:20 +0000
From: Bough Chen <haibo.chen@....com>
To: Theodore Tso <tytso@....edu>
CC: "jack@...e.cz" <jack@...e.cz>, "adilger.kernel@...ger.ca"
<adilger.kernel@...ger.ca>, "linux-ext4@...r.kernel.org"
<linux-ext4@...r.kernel.org>, "imx@...ts.linux.dev" <imx@...ts.linux.dev>
Subject: RE: ext4 issue on linux-next(next-20251030)
Hi Theodore,
Thanks for your quick reply.
root@...6ul7d:~# e2image -Q /dev/mmcblk2p2 fs.qcow2
e2image 1.47.3 (8-Jul-2025)
root@...6ul7d:~# bzip2 -z fs.qcow2
For the fs.qcow2.bz2 file, please refer to the attachment.
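In case it helps on your side, I believe the image can be unpacked and converted back to a raw file with something like the commands below (fs.img is just an example output name; I have not verified this step on my board):

    bunzip2 fs.qcow2.bz2
    e2image -r fs.qcow2 fs.img    # fs.img is an example name for the raw output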
For this /dev/mmcblk2p2, umount sometimes does not hit this issue, but after several mount/umount operations the issue comes up again.
I also pasted the output from your second suggestion:
root@...6ul7d:~# dumpe2fs -h /dev/mmcblk2p2
dumpe2fs 1.47.3 (8-Jul-2025)
Filesystem volume name: root
Last mounted on: <not available>
Filesystem UUID: dc06048e-939b-4827-97ef-f815486f505f
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index orphan_file filetype extent 64bit flex_bg metadata_csum_seed sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 570080
Block count: 1139298
Reserved block count: 56964
Overhead clusters: 56548
Free blocks: 419806
Free inodes: 505121
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 556
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16288
Inode blocks per group: 1018
Flex block group size: 16
Filesystem created: Tue Apr 5 23:00:00 2011
Last mount time: Fri Oct 31 02:18:16 2025
Last write time: Fri Oct 31 02:18:17 2025
Mount count: 6
Maximum mount count: -1
Last checked: Thu Oct 30 10:35:48 2025
Check interval: 0 (<none>)
Lifetime writes: 4248 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: ff13b55b-8055-50d5-88d5-80782d2e8e86
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0xff831d21
Checksum seed: 0x794f2ccc
Orphan file inode: 12
Journal features: (none)
Total journal size: 64M
Total journal blocks: 16384
Max transaction length: 16384
Fast commit length: 0
Journal sequence: 0x00000002
Journal start: 0
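If more details would help, I can also run a forced read-only check on the unmounted partition and share its output, along the lines of:

    e2fsck -fn /dev/mmcblk2p2    # -f forces a check even if the fs is marked clean, -n answers "no" to everything (read-only)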
Regards
Haibo Chen
> -----Original Message-----
> From: Theodore Tso <tytso@....edu>
> Sent: 31 October 2025 9:34
> To: Bough Chen <haibo.chen@....com>
> Cc: jack@...e.cz; adilger.kernel@...ger.ca; linux-ext4@...r.kernel.org;
> imx@...ts.linux.dev
> Subject: Re: ext4 issue on linux-next(next-20251030)
>
> On Thu, Oct 30, 2025 at 11:11:51AM +0000, Bough Chen wrote:
> > Hi Jack,
> >
> > On the latest linux-next, I find your patch acf943e9768e ("ext4: fix checks for
> orphan inodes") trigger the following issue on our imx7d-sdb board.
> > I do not have enough background knowledge of ext4, so don't know why
> > there are orphan inodes on the partition with ext4. Not sure whether this is a
> real issue or we need some special operation on current ext4 partition.
>
> If you are willing to let me see your file names, you could send me just the
> metadata blocks so I can examine the file system image. The details are in the
> REPORTING BUGS section of the e2fsck man page as well as the RAW
> IMAGE FILE and QCOW2 IMAGE FILE sections of the e2image man page, but the
> short version is:
>
>
> e2image -Q /dev/mmcblk2p2 fs.qcow2
> bzip2 -z fs.qcow2
>
> ... and then send me the fs.qcow2.bz file.
>
> If you aren't, please try running "dumpe2fs -h /dev/mmcblk2p2" and send me
> the output.
>
> Thanks,
>
> - Ted