Message-ID: <bug-217965-13602-3HIdxjk8HK@https.bugzilla.kernel.org/>
Date: Sat, 23 Dec 2023 01:48:39 +0000
From: bugzilla-daemon@...nel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 217965] ext4(?) regression since 6.5.0 on sata hdd
https://bugzilla.kernel.org/show_bug.cgi?id=217965
Andreas Dilger (adilger.kernelbugzilla@...ger.ca) changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             CC|                            |adilger.kernelbugzilla@...ger.ca
--- Comment #48 from Andreas Dilger (adilger.kernelbugzilla@...ger.ca) ---
Independent of the fixes to the mballoc code to improve allocation
performance, I'm wondering about the "RAID stride" values in use here.
The "stride" value is intended to reflect the size of one complete
stripe across the data disks (e.g. 128KiB chunk size * 8 data disks =
1MiB). The filesystem doesn't see the parity disks, so the number of
those disks does not matter to ext4.
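As a sketch of that arithmetic (the geometry below — 128KiB chunks,
8 data disks, 4KiB ext4 blocks — is an assumed example, not taken from
any of the reports in this bug):

```shell
chunk_kib=128        # RAID chunk size written to each disk
data_disks=8         # data disks only; parity disks are invisible to ext4
block_kib=4          # ext4 block size

stride=$((chunk_kib / block_kib))       # filesystem blocks per chunk
stripe_width=$((stride * data_disks))   # filesystem blocks per full stripe
echo "stride=$stride stripe_width=$stripe_width"

# These would then be passed to mke2fs as, e.g.:
#   mke2fs -t ext4 -E stride=32,stripe_width=256 /dev/md0
```

With those inputs the full stripe is 256 blocks * 4KiB = 1MiB, matching
the example above — nowhere near the ~128MB values quoted below.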
> RAID stride: 32752
> In my case, I have an EXT4 partition over an mdadm raid 1 array of
> two HDDs.
> RAID stride: 32745
It seems in all these cases that the stripe/stride is strange. I can't
see any value in setting stride to (almost) 128MB, especially not on a
RAID-1 system. Were these values generated automatically by mke2fs,
or entered manually? If manually, why was that value chosen? If
something in the documentation is unclear, it should be fixed; likewise
if mke2fs is detecting the RAID geometry incorrectly.
> By default the FS is mounted with stripe=1280 because it's on a raid6.
> Remounting with stripe=0 works around the problem. Excellent!
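For reference, the recorded geometry can be inspected and the quoted
workaround applied roughly as follows; the device and mount point are
placeholders, not taken from the report:

```shell
# Show the RAID stride / stripe width recorded in the superblock:
dumpe2fs -h /dev/md0 | grep -i raid

# Override at mount time, without reformatting:
mount -o remount,stripe=0 /mnt

# Or clear the recorded values persistently in the superblock:
tune2fs -E stride=0,stripe_width=0 /dev/md0
```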
Carlos, how many data disks are in this system? Do you have 5x 256KiB
or 10x 128KiB *data* disks, plus 2 *parity* disks?
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.