Message-ID: <aYSCs6kyIZJS5MW4@dread.disaster.area>
Date: Thu, 5 Feb 2026 22:44:51 +1100
From: Dave Chinner <david@...morbit.com>
To: alexjlzheng@...il.com
Cc: cem@...nel.org, linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Jinliang Zheng <alexjlzheng@...cent.com>
Subject: Re: [PATCH 2/2] xfs: take a breath in xfsaild()
On Thu, Feb 05, 2026 at 04:26:21PM +0800, alexjlzheng@...il.com wrote:
> From: Jinliang Zheng <alexjlzheng@...cent.com>
>
> We noticed a softlockup like:
>
> crash> bt
> PID: 5153 TASK: ffff8960a7ca0000 CPU: 115 COMMAND: "xfsaild/dm-4"
> #0 [ffffc9001b1d4d58] machine_kexec at ffffffff9b086081
> #1 [ffffc9001b1d4db8] __crash_kexec at ffffffff9b20817a
> #2 [ffffc9001b1d4e78] panic at ffffffff9b107d8f
> #3 [ffffc9001b1d4ef8] watchdog_timer_fn at ffffffff9b243511
> #4 [ffffc9001b1d4f28] __hrtimer_run_queues at ffffffff9b1e62ff
> #5 [ffffc9001b1d4f80] hrtimer_interrupt at ffffffff9b1e73d4
> #6 [ffffc9001b1d4fd8] __sysvec_apic_timer_interrupt at ffffffff9b07bb29
> #7 [ffffc9001b1d4ff0] sysvec_apic_timer_interrupt at ffffffff9bd689f9
> --- <IRQ stack> ---
> #8 [ffffc90031cd3a18] asm_sysvec_apic_timer_interrupt at ffffffff9be00e86
> [exception RIP: part_in_flight+47]
> RIP: ffffffff9b67960f RSP: ffffc90031cd3ac8 RFLAGS: 00000282
> RAX: 00000000000000a9 RBX: 00000000000c4645 RCX: 00000000000000f5
> RDX: ffffe89fffa36fe0 RSI: 0000000000000180 RDI: ffffffff9d1ae260
> RBP: ffff898083d30000 R8: 00000000000000a8 R9: 0000000000000000
> R10: ffff89808277d800 R11: 0000000000001000 R12: 0000000101a7d5be
> R13: 0000000000000000 R14: 0000000000001001 R15: 0000000000001001
> ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
> #9 [ffffc90031cd3ad8] update_io_ticks at ffffffff9b6602e4
> #10 [ffffc90031cd3b00] bdev_start_io_acct at ffffffff9b66031b
> #11 [ffffc90031cd3b20] dm_io_acct at ffffffffc18d7f98 [dm_mod]
> #12 [ffffc90031cd3b50] dm_submit_bio_remap at ffffffffc18d8195 [dm_mod]
> #13 [ffffc90031cd3b70] dm_split_and_process_bio at ffffffffc18d9799 [dm_mod]
> #14 [ffffc90031cd3be0] dm_submit_bio at ffffffffc18d9b07 [dm_mod]
> #15 [ffffc90031cd3c20] __submit_bio at ffffffff9b65f61c
> #16 [ffffc90031cd3c38] __submit_bio_noacct at ffffffff9b65f73e
> #17 [ffffc90031cd3c80] xfs_buf_ioapply_map at ffffffffc23df4ea [xfs]
This isn't from a TOT kernel. xfs_buf_ioapply_map() went away a year
ago. What kernel is this occurring on?
> #18 [ffffc90031cd3ce0] _xfs_buf_ioapply at ffffffffc23df64f [xfs]
> #19 [ffffc90031cd3d50] __xfs_buf_submit at ffffffffc23df7b8 [xfs]
> #20 [ffffc90031cd3d70] xfs_buf_delwri_submit_buffers at ffffffffc23dffbd [xfs]
> #21 [ffffc90031cd3df8] xfsaild_push at ffffffffc24268e5 [xfs]
> #22 [ffffc90031cd3eb8] xfsaild at ffffffffc2426f88 [xfs]
> #23 [ffffc90031cd3ef8] kthread at ffffffff9b1378fc
> #24 [ffffc90031cd3f30] ret_from_fork at ffffffff9b042dd0
> #25 [ffffc90031cd3f50] ret_from_fork_asm at ffffffff9b007e2b
>
> This patch adds cond_resched() to avoid softlockups similar to the one
> described above.
Again: how does this softlockup occur?
xfsaild_push() pushes at most 1000 items at a time for IO. It would
have to be a fairly fast device not to block on the request queues
filling as we submit batches of 1000 buffers at a time.
Then the higher level AIL traversal loop would also have to be
making continuous progress without blocking. Hence it must not hit
the end of the AIL, nor ever hit pinned, stale, flushing or locked
items in the AIL for as long as it takes for the soft lockup timer
to fire. This seems ... highly unlikely.
IOWs, if we are looping in this path without giving up the CPU for
seconds at a time, then it is not behaving as I'd expect it to
behave. We need to understand why is this code apparently behaving
in an unexpected way, not just silence the warning....
Can you please explain how the softlockup timer is being hit here so we
can try to understand the root cause of the problem? Workload,
hardware, filesystem config, storage stack, etc all matter here,
because they all play a part in these paths never blocking on
a lock, a full queue, a pinned buffer, etc, whilst processing
hundreds of thousands of dirty objects for IO.
At least, I'm assuming we're talking about hundreds of thousands of
objects, because I know the AIL can push a hundred thousand dirty
buffers to disk every second when it is close to being CPU bound. So
if it's not giving up the CPU for long enough to fire the soft
lockup timer, we must be talking about processing millions of
objects without blocking even once....
-Dave.
--
Dave Chinner
david@...morbit.com