Message-ID: <bug-220594-13602-Z3beIa2CkE@https.bugzilla.kernel.org/>
Date: Mon, 24 Nov 2025 16:33:23 +0000
From: bugzilla-daemon@...nel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 220594] Online defragmentation has broken in 6.16
https://bugzilla.kernel.org/show_bug.cgi?id=220594
--- Comment #13 from Theodore Tso (tytso@....edu) ---
On Mon, Nov 24, 2025 at 04:13:27PM +0000, bugzilla-daemon@...nel.org wrote:
> > And for the files that were failing, if you unmount the file system
> > and remount it, can you then defrag the file in question? If the
>
> No. Tried that thrice.
Can you try that again, and verify using strace that you get the same
EBUSY error (as opposed to some other error) after unmounting and
remounting the file system? At this point, I don't want to take
*anything* for granted.
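For example, something like this (the path is just a placeholder):

# e4defrag issues the defrag request via the EXT4_IOC_MOVE_EXT ioctl,
# so trace the ioctl calls and see which one returns EBUSY.
strace -f -e trace=ioctl e4defrag -v /path/to/problem-file

If the failing call turns out to be something other than the
move-extent ioctl, that would be interesting to know as well.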
Given that in past attempts, where you've sent me a metadata-only
e2image dump, I haven't been able to reproduce it, are you willing
to build an upstream kernel (as opposed to a Fedora kernel) and
demonstrate that the problem reproduces there? If so, would you be
willing to run an upstream kernel with some printk debugging added
so we can see what is going on --- since, again, I still haven't
been able to reproduce it on my systems.
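Roughly, building a vanilla kernel reusing your current Fedora
config would look something like this (the tree and steps are
illustrative; adjust for however Fedora expects kernels installed):

# Clone the stable tree, reuse the running kernel's config, build,
# and install.
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
cp /boot/config-$(uname -r) .config
make olddefconfig
make -j$(nproc)
sudo make modules_install install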
> > What this means is that if the file has pages which need to be written
> > out to the final location on disk (e.g., if you are in data=journal
>
> Journalling is disabled on all my ext4 partitions.
So you are running a file system with ^has_journal? Can you send me
a copy of the dumpe2fs -h output for that file system?
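That is, something like (the device name is a placeholder):

# -h prints just the superblock; check the "Filesystem features"
# line for the presence or absence of has_journal.
dumpe2fs -h /dev/sdXN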
Something else to do. For those files for which e4defrag is failing
reliably after an unmount/remount, are you reproducing the failure by
running e4defrag on just that one file, or by iterating over the
entire file system? If it reproduces reliably when you defrag just
that one file, can you try using debugfs's "stat" command and see
what might be different about that file versus some file for which a
single-file e4defrag *does* work?
e.g.:
debugfs /dev/hdXX
debugfs: stat groups
Inode: 177 Type: regular Mode: 0755 Flags: 0x80000
Generation: 0 Version: 0x00000000:00000000
User: 0 Group: 0 Project: 0 Size: 43432
File ACL: 0
Links: 1 Blockcount: 88
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x6916c804:00000000 -- Fri Nov 14 01:11:16 2025
atime: 0x6916c879:00000000 -- Fri Nov 14 01:13:13 2025
mtime: 0x684062bd:00000000 -- Wed Jun 4 11:14:05 2025
crtime: 0x6924883d:00000000 -- Mon Nov 24 11:30:53 2025
Size of extra inode fields: 32
Inode checksum: 0x2e204798
EXTENTS:
(0-10):9368-9378
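And for reference, the two invocation modes I mean (paths are
placeholders):

# defragment just the one file
e4defrag -v /path/to/problem-file
# versus iterating over everything under the mount point
e4defrag /mnt/point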
Finally, I'm curious --- if it's only a few files out of hundreds of
thousands of files, why do you *care*? You seem to be emphatic about
calling online defragmentation *broken* and seem outraged that no one
else is discussing or working on this issue. Why is this a
high-priority issue for you?
Thanks,
- Ted
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.