Message-ID: <bug-217965-13602-acQutmnfg8@https.bugzilla.kernel.org/>
Date: Thu, 16 Nov 2023 03:15:33 +0000
From: bugzilla-daemon@...nel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 217965] ext4(?) regression since 6.5.0 on sata hdd
https://bugzilla.kernel.org/show_bug.cgi?id=217965
--- Comment #27 from Ojaswin Mujoo (ojaswin.mujoo@....com) ---
Hey Eyal,
So the way most file systems handle their writes is:
1. Data is written to memory buffers aka pagecache
2. When writeback/flush kicks in, FS tries to group adjacent data together and
allocates disk blocks for it
3. Finally, send the data down to the lower levels (block layer -> raid -> scsi
etc.) for the actual write.
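The pagecache step is easy to see from userspace. Here's a rough sketch (the
temp file is just for illustration; "Dirty:" in /proc/meminfo counts dirty
pagecache system-wide):

```shell
# Write a file without fsync: the data initially sits dirty in the
# pagecache, not on disk.
tmpd=$(mktemp -d)
dd if=/dev/zero of="$tmpd/testfile" bs=1M count=8 status=none

# System-wide dirty pagecache; the 8 MiB above is counted here until
# writeback runs.
grep '^Dirty:' /proc/meminfo

# Force writeback: this is the point where ext4 (with delalloc)
# actually allocates blocks and submits the I/O.
sync
grep '^Dirty:' /proc/meminfo

rm -rf "$tmpd"
```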
The practice of delaying the actual allocation until writeback/flush is known
as delayed allocation, or delalloc, in ext4 and is on by default (other FSes
might use different names). This is why the ext4 allocation-related functions
(ext4_mb_regular_allocator etc.) show up in your perf report of the flusher
thread.
With delalloc, we send bigger requests to the ext4 allocator since we try to
group buffers together. With nodelalloc we disable this, so FS block allocation
happens while we are dirtying the buffers (in step 1 above), and we only
allocate as much as that write asked for, thus sending smaller requests at a
time. Since with delalloc your flusher seemed to be spending a lot of time in
the ext4 allocation routines, I wanted to check whether a change in allocation
pattern via nodelalloc could help us narrow down the issue.
Using:
$ sudo mount -o remount,nodelalloc /data1
should be safe and preserve your other mount options so you can give it a try.
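If it helps, you can confirm which mode is active by reading back the live
mount options (mount point /data1 taken from your report; with the default
delalloc, no "nodelalloc" string is listed):

```shell
# After the remount, the live options for /data1 should include
# "nodelalloc"; before it, the string is absent.
grep ' /data1 ' /proc/self/mounts
```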
Lastly, thanks for the perf report. However, I'm sorry, I forgot to mention
that I was actually looking for the call graph, which can be collected as
follows:
$ sudo perf record -p 1234 -g sleep 60
Can you please share the report of the above command?
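In case it saves a round trip, the full sequence I have in mind is the record
step above followed by a plain-text report you can attach (the PID 1234 is a
placeholder for the flusher thread's actual PID):

```shell
# Record 60 seconds of samples with call graphs from the given PID,
# then dump a text report suitable for attaching to the bug.
sudo perf record -p 1234 -g -- sleep 60
sudo perf report --stdio > perf-report.txt
```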
Thanks!
Ojaswin
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.