Message-ID: <aYUJBChyWi3WdOIR@dread.disaster.area>
Date: Fri, 6 Feb 2026 08:17:56 +1100
From: Dave Chinner <david@...morbit.com>
To: Jinliang Zheng <alexjlzheng@...il.com>
Cc: alexjlzheng@...cent.com, cem@...nel.org, linux-kernel@...r.kernel.org,
linux-xfs@...r.kernel.org
Subject: Re: [PATCH 2/2] xfs: take a breath in xfsaild()

On Thu, Feb 05, 2026 at 08:49:59PM +0800, Jinliang Zheng wrote:
> On Thu, 5 Feb 2026 22:44:51 +1100, david@...morbit.com wrote:
> > On Thu, Feb 05, 2026 at 04:26:21PM +0800, alexjlzheng@...il.com wrote:
> > > From: Jinliang Zheng <alexjlzheng@...cent.com>
> > >
> > > We noticed a softlockup like:
> > >
> > > crash> bt
> > > PID: 5153 TASK: ffff8960a7ca0000 CPU: 115 COMMAND: "xfsaild/dm-4"
> > > #0 [ffffc9001b1d4d58] machine_kexec at ffffffff9b086081
> > > #1 [ffffc9001b1d4db8] __crash_kexec at ffffffff9b20817a
> > > #2 [ffffc9001b1d4e78] panic at ffffffff9b107d8f
> > > #3 [ffffc9001b1d4ef8] watchdog_timer_fn at ffffffff9b243511
> > > #4 [ffffc9001b1d4f28] __hrtimer_run_queues at ffffffff9b1e62ff
> > > #5 [ffffc9001b1d4f80] hrtimer_interrupt at ffffffff9b1e73d4
> > > #6 [ffffc9001b1d4fd8] __sysvec_apic_timer_interrupt at ffffffff9b07bb29
> > > #7 [ffffc9001b1d4ff0] sysvec_apic_timer_interrupt at ffffffff9bd689f9
> > > --- <IRQ stack> ---
> > > #8 [ffffc90031cd3a18] asm_sysvec_apic_timer_interrupt at ffffffff9be00e86
> > > [exception RIP: part_in_flight+47]
> > > RIP: ffffffff9b67960f RSP: ffffc90031cd3ac8 RFLAGS: 00000282
> > > RAX: 00000000000000a9 RBX: 00000000000c4645 RCX: 00000000000000f5
> > > RDX: ffffe89fffa36fe0 RSI: 0000000000000180 RDI: ffffffff9d1ae260
> > > RBP: ffff898083d30000 R8: 00000000000000a8 R9: 0000000000000000
> > > R10: ffff89808277d800 R11: 0000000000001000 R12: 0000000101a7d5be
> > > R13: 0000000000000000 R14: 0000000000001001 R15: 0000000000001001
> > > ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
> > > #9 [ffffc90031cd3ad8] update_io_ticks at ffffffff9b6602e4
> > > #10 [ffffc90031cd3b00] bdev_start_io_acct at ffffffff9b66031b
> > > #11 [ffffc90031cd3b20] dm_io_acct at ffffffffc18d7f98 [dm_mod]
> > > #12 [ffffc90031cd3b50] dm_submit_bio_remap at ffffffffc18d8195 [dm_mod]
> > > #13 [ffffc90031cd3b70] dm_split_and_process_bio at ffffffffc18d9799 [dm_mod]
> > > #14 [ffffc90031cd3be0] dm_submit_bio at ffffffffc18d9b07 [dm_mod]
> > > #15 [ffffc90031cd3c20] __submit_bio at ffffffff9b65f61c
> > > #16 [ffffc90031cd3c38] __submit_bio_noacct at ffffffff9b65f73e
> > > #17 [ffffc90031cd3c80] xfs_buf_ioapply_map at ffffffffc23df4ea [xfs]
> >
> > This isn't from a TOT kernel. xfs_buf_ioapply_map() went away a year
> > ago. What kernel is this occurring on?
>
> Thanks for your reply. :)
>
> It's based on v6.6.

v6.6 was released in late 2023. I think we largely fixed this
problem with this series that was merged into 6.11 in mid 2024:
https://lore.kernel.org/linux-xfs/20220809230353.3353059-1-david@fromorbit.com/

In more detail...

> > Can you please explain how the softlockup timer is being hit here so we
> > can try to understand the root cause of the problem? Workload,
>
> Again, a testsuite combining stress-ng, LTP, and fio, executed concurrently.
>
> > hardware, filesystem config, storage stack, etc all matter here,
>
>
> ================================= CPU ======================================
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Address sizes: 45 bits physical, 48 bits virtual
> Byte Order: Little Endian
> CPU(s): 384

... 384 CPUs banging on a single filesystem....

> ================================= XFS ======================================
> [root@...alhost ~]# xfs_info /dev/ts/home
> meta-data=/dev/mapper/ts-home isize=512    agcount=4, agsize=45875200 blks

... that has very limited allocation parallelism (agcount=4), and ...

>          =                    sectsz=4096  attr=2, projid32bit=1
>          =                    crc=1        finobt=1, sparse=1, rmapbt=1
>          =                    reflink=1    bigtime=1 inobtcount=1 nrext64=1
> data     =                    bsize=4096   blocks=183500800, imaxpct=25
>          =                    sunit=0      swidth=0 blks
> naming   =version 2           bsize=4096   ascii-ci=0, ftype=1
> log      =internal log        bsize=4096   blocks=89600, version=2

... a relatively small log (89600 x 4096 byte blocks = 350MB)
compared to the size of the system that is hammering on it.

i.e. this is exactly the sort of system architecture that will push
heaps of concurrency into the filesystem's transaction reservation
slow path and keep it there for long periods of time, especially
under sustained, highly concurrent, modification-heavy stress
workloads.

Exposing any kernel spin lock to unbound, user-controlled
concurrency will eventually result in a workload that causes
catastrophic spin lock contention breakdown. At that point,
everything that uses said lock spends excessive amounts of time
spinning instead of making progress.
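
The effect is easy to reproduce in userspace. This is a minimal,
hypothetical sketch (plain pthreads, nothing to do with XFS) of N
threads hammering a single spin lock with a tiny critical section:

	/* gcc -O2 -pthread contend.c -- illustrative only */
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	static pthread_spinlock_t lock;
	static unsigned long counter;

	static void *hammer(void *arg)
	{
		for (int i = 0; i < 1000000; i++) {
			pthread_spin_lock(&lock);  /* lock cacheline bounces between CPUs */
			counter++;                 /* tiny critical section */
			pthread_spin_unlock(&lock);
		}
		return arg;
	}

	int main(int argc, char **argv)
	{
		int nthreads = argc > 1 ? atoi(argv[1]) : 4;
		pthread_t tids[nthreads];

		pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
		for (int i = 0; i < nthreads; i++)
			pthread_create(&tids[i], NULL, hammer, NULL);
		for (int i = 0; i < nthreads; i++)
			pthread_join(tids[i], NULL);
		printf("%lu increments\n", counter);
		return 0;
	}

Time it with 1 thread, then with a few hundred. Once the lock
cacheline is always remote, adding CPUs makes aggregate progress
worse, not better - that's the contention breakdown.
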
This is one of the scalability problems the patchset I linked above
addressed. Prior to that patchset, the transaction reservation slow
path (the "journal full" path) exposed the AIL lock to unbound
userspace concurrency via the "update the AIL push target"
mechanism. Both journal IO completion and the xfsaild are heavy
users of the AIL lock, but don't normally contend with each other
because internal filesystem concurrency is tightly bounded. Once
userspace starts banging on it, however....
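
Schematically, the old shape of the problem was something like this
(a paraphrased sketch from memory - the field names are real, but
the code is illustrative, not the actual pre-6.11 kernel code):

	/* transaction reservation slow path: runs in every stalled task */
	spin_lock(&ailp->ail_lock);
	if (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0)
		ailp->ail_target = threshold_lsn;  /* publish new push target */
	spin_unlock(&ailp->ail_lock);
	wake_up_process(ailp->ail_task);

The xfsaild and journal IO completion also cycle ail_lock for every
item they push or remove, so hundreds of tasks stalled in
reservation turn a normally uncontended lock into a convoy.
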
Silencing soft lockups with cond_resched() is almost never the right
thing to do - they are generally indicative of some other problem
occurring. We need to understand what that "some other problem" is
before we do anything else...
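
To make that concrete with an illustrative sketch (not the actual
xfsaild loop, and not the proposed patch verbatim):

	while (!kthread_should_stop()) {
		spin_lock(&ailp->ail_lock);    /* still heavily contended */
		/* ... push AIL items ... */
		spin_unlock(&ailp->ail_lock);

		/*
		 * cond_resched() resets the soft lockup watchdog, so the
		 * warning goes away, but the CPU time is still burnt
		 * spinning on the lock and the AIL makes no faster
		 * progress. The symptom is hidden; the cause remains.
		 */
		cond_resched();
	}
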
-Dave.
--
Dave Chinner
david@...morbit.com