Message-ID: <543e68df-e27d-7e6e-0d50-867dd6cf2fe0@tuyoix.net>
Date: Fri, 24 Jan 2025 15:40:43 -0700 (MST)
From: Marc Aurèle La France <tsi@...oix.net>
To: Andrew Morton <akpm@...ux-foundation.org>, Jens Axboe <axboe@...nel.dk>,
linux-mm@...ck.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: kswapd lockdep splat
Hi.
I've yet to find a better place to post these splats than the
addresses get_maintainer.pl suggests for the affected source files,
mm/vmscan.c and block/blk-mq.c.
I don't have a reproducible case, so I haven't been able to bisect
anything.
Let me know if you need more information.
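If I'm reading the chain correctly, the #1 dependency gets recorded
during SCSI scan, where scsi_realloc_sdev_budget_map() freezes the
queue and then calls sbitmap_init_node() with GFP_KERNEL, so
fs_reclaim is acquired while q_usage_counter(io) is held. The #0 side
is kswapd itself, which holds fs_reclaim and then enters the queue
through bio_queue_enter() when writing swap. A simplified sketch of
the pattern as I understand it (illustrative only, not the actual
scsi_scan.c code; the depth and shift arguments are made up):

	blk_mq_freeze_queue(sdev->request_queue);
	/*
	 * GFP_KERNEL may enter direct reclaim here, recording
	 * q_usage_counter(io) -> fs_reclaim in lockdep's graph.
	 */
	ret = sbitmap_init_node(&sdev->budget_map, new_depth, -1,
				GFP_KERNEL, sdev->request_queue->node,
				false, true);
	blk_mq_unfreeze_queue(sdev->request_queue);

One conceivable way to avoid recording that dependency would be to do
the allocation in a noio scope, since fs_reclaim_acquire() skips the
fs_reclaim lockdep map once __GFP_FS is masked off:

	unsigned int noio_flags = memalloc_noio_save();
	ret = sbitmap_init_node(&sdev->budget_map, new_depth, -1,
				GFP_KERNEL, sdev->request_queue->node,
				false, true);
	memalloc_noio_restore(noio_flags);

But I'll leave that call to people who know this code better than I
do.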
Thanks and have a great day.
Marc.
--8<--
======================================================
WARNING: possible circular locking dependency detected
6.13.0 #1 Not tainted
------------------------------------------------------
kswapd0/70 is trying to acquire lock:
ffff8881025d5d78 (&q->q_usage_counter(io)){++++}-{0:0}, at: blk_mq_submit_bio+0x461/0x6e0
but task is already holding lock:
ffffffff81ef5f40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x9f/0x760
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire.part.0+0x94/0x1f0
fs_reclaim_acquire+0x8d/0xc0
__kmalloc_node_noprof+0x86/0x360
sbitmap_init_node+0x85/0x200
scsi_realloc_sdev_budget_map+0xc5/0x190
scsi_add_lun+0x3ee/0x6c0
scsi_probe_and_add_lun+0x111/0x290
__scsi_add_device+0xc7/0xd0
ata_scsi_scan_host+0x93/0x1b0
async_run_entry_fn+0x21/0xa0
process_one_work+0x1fd/0x560
worker_thread+0x1bd/0x3a0
kthread+0xdc/0x110
ret_from_fork+0x2b/0x40
ret_from_fork_asm+0x11/0x20
-> #0 (&q->q_usage_counter(io)){++++}-{0:0}:
check_prev_add+0xe2/0xc80
__lock_acquire+0xf37/0x12c0
lock_acquire.part.0+0x94/0x1f0
bio_queue_enter+0xf1/0x220
blk_mq_submit_bio+0x461/0x6e0
__submit_bio+0x95/0x160
submit_bio_noacct_nocheck+0xbd/0x1a0
swap_writepage+0xff/0x1a0
pageout+0xfb/0x2a0
shrink_folio_list+0x57e/0xad0
evict_folios+0x224/0x6e0
try_to_shrink_lruvec+0x186/0x300
shrink_node+0x37f/0x440
balance_pgdat+0x2a4/0x760
kswapd+0x1b3/0x3b0
kthread+0xdc/0x110
ret_from_fork+0x2b/0x40
ret_from_fork_asm+0x11/0x20
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io));
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io));
*** DEADLOCK ***
1 lock held by kswapd0/70:
#0: ffffffff81ef5f40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x9f/0x760
stack backtrace:
CPU: 2 UID: 0 PID: 70 Comm: kswapd0 Not tainted 6.13.0 #1
Hardware name: ASUS All Series/Z87-WS, BIOS 2004 06/05/2014
Call Trace:
<TASK>
dump_stack_lvl+0x57/0x80
print_circular_bug.cold+0x38/0x45
check_noncircular+0x107/0x120
? unwind_next_frame+0x318/0x690
check_prev_add+0xe2/0xc80
__lock_acquire+0xf37/0x12c0
? stack_trace_save+0x3b/0x50
lock_acquire.part.0+0x94/0x1f0
? blk_mq_submit_bio+0x461/0x6e0
? rcu_is_watching+0xd/0x40
? lock_acquire+0x100/0x140
? blk_mq_submit_bio+0x461/0x6e0
? bio_queue_enter+0xc9/0x220
bio_queue_enter+0xf1/0x220
? blk_mq_submit_bio+0x461/0x6e0
blk_mq_submit_bio+0x461/0x6e0
? lock_is_held_type+0xc5/0x120
? rcu_is_watching+0xd/0x40
? kmem_cache_alloc_noprof+0x209/0x260
__submit_bio+0x95/0x160
? lock_is_held_type+0xc5/0x120
? submit_bio_noacct_nocheck+0xbd/0x1a0
submit_bio_noacct_nocheck+0xbd/0x1a0
swap_writepage+0xff/0x1a0
pageout+0xfb/0x2a0
shrink_folio_list+0x57e/0xad0
? rcu_is_watching+0xd/0x40
? scan_folios+0x5ce/0x610
? find_held_lock+0x2b/0x80
? mark_held_locks+0x40/0x70
? _raw_spin_unlock_irq+0x1f/0x40
evict_folios+0x224/0x6e0
try_to_shrink_lruvec+0x186/0x300
shrink_node+0x37f/0x440
balance_pgdat+0x2a4/0x760
? lock_acquire.part.0+0x94/0x1f0
kswapd+0x1b3/0x3b0
? ipi_sync_rq_state+0x30/0x30
? balance_pgdat+0x760/0x760
kthread+0xdc/0x110
? kthread_park+0x80/0x80
ret_from_fork+0x2b/0x40
? kthread_park+0x80/0x80
ret_from_fork_asm+0x11/0x20
</TASK>