Message-ID: <d3ff0792-5811-40b5-ae98-e6d30281930b@roeck-us.net>
Date: Tue, 5 Aug 2025 12:54:36 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Brian Norris <briannorris@...omium.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Tsai Sung-Fu <danielsftsai@...gle.com>,
Douglas Anderson <dianders@...omium.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] genirq/test: Resolve irq lock inversion warnings
On 8/5/25 11:32, Brian Norris wrote:
> irq_shutdown_and_deactivate() is normally called with the descriptor
> lock held, and interrupts disabled. Nested a few levels down, it grabs
> the global irq_resend_lock. Lockdep rightfully complains [1].
>
> Grab the descriptor lock, and disable interrupts, to resolve the
> complaint.
>
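For readers following along: the fix presumably amounts to wrapping the test's direct irq_shutdown_and_deactivate() call with the descriptor lock held and hard interrupts disabled, mirroring the calling convention the genirq core uses before it reaches clear_irq_resend() and irq_resend_lock. A minimal sketch, not the actual diff (the helper name and the open-coded locking form are assumptions):

	#include <linux/irq.h>
	#include <linux/spinlock.h>
	#include "internals.h"	/* irq_shutdown_and_deactivate(), kernel/irq/ */

	/*
	 * Hypothetical helper for the kunit test: take desc->lock with hard
	 * interrupts off before shutting the interrupt down, so the nested
	 * irq_resend_lock is only ever acquired from a HARDIRQ-safe context
	 * and lockdep no longer sees an inversion.
	 */
	static void shut_down_irq_locked(struct irq_desc *desc)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&desc->lock, flags);
		irq_shutdown_and_deactivate(desc);
		raw_spin_unlock_irqrestore(&desc->lock, flags);
	}

The in-tree patch may well use the guard()/scoped_guard() cleanup helpers instead; the open-coded form above only illustrates the lock ordering being restored.
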
> Tested with:
>
> tools/testing/kunit/kunit.py run 'irq_test_cases*' \
> --arch x86_64 --qemu_args '-smp 2' \
> --kconfig_add CONFIG_DEBUG_KERNEL=y \
> --kconfig_add CONFIG_PROVE_LOCKING=y \
> --raw_output=all
>
> [1]
> ========================================================
> WARNING: possible irq lock inversion dependency detected
> 6.16.0-11743-g6bcdbd62bd56 #2 Tainted: G N
> --------------------------------------------------------
> kunit_try_catch/40 just changed the state of lock:
> ffffffff898b1538 (irq_resend_lock){+...}-{2:2}, at: clear_irq_resend+0x14/0x70
> but this lock was taken by another, HARDIRQ-safe lock in the past:
> (&irq_desc_lock_class){-.-.}-{2:2}
>
> and interrupts could create inverse lock ordering between them.
>
> other info that might help us debug this:
> Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(irq_resend_lock);
>                                local_irq_disable();
>                                lock(&irq_desc_lock_class);
>                                lock(irq_resend_lock);
>   <Interrupt>
>     lock(&irq_desc_lock_class);
>
> [...]
>
> ... key at: [<ffffffff898b1538>] irq_resend_lock+0x18/0x60
> ... acquired at:
> __lock_acquire+0x82b/0x2620
> lock_acquire+0xc7/0x2c0
> _raw_spin_lock+0x2b/0x40
> clear_irq_resend+0x14/0x70
> irq_shutdown_and_deactivate+0x29/0x80
> irq_shutdown_depth_test+0x1ce/0x600
> kunit_try_run_case+0x90/0x120
> kunit_generic_run_threadfn_adapter+0x1c/0x40
> kthread+0xf3/0x200
> ret_from_fork+0x140/0x1b0
> ret_from_fork_asm+0x1a/0x30
>
> [ 5.766715] ok 2 irq_free_disabled_test
> [ 5.769030]
> [ 5.769106] ========================================================
> [ 5.769159] WARNING: possible irq lock inversion dependency detected
> [ 5.769355] 6.16.0-11743-g6bcdbd62bd56 #1 Tainted: G N
> [ 5.769413] --------------------------------------------------------
> [ 5.769465] kunit_try_catch/122 just changed the state of lock:
> [ 5.769532] ffffffffb81ace18 (irq_resend_lock){+...}-{2:2}, at: clear_irq_resend+0x14/0x70
> [ 5.769899] but this lock was taken by another, HARDIRQ-safe lock in the past:
> [ 5.769967] (&irq_desc_lock_class){-.-.}-{2:2}
> [ 5.769989]
> [ 5.769989]
> [ 5.769989] and interrupts could create inverse lock ordering between them.
> ...
> [ 5.776956] ret_from_fork_asm+0x1a/0x30
> [ 5.776983] </TASK>
> [ 5.778916] # irq_shutdown_depth_test: pass:1 fail:0 skip:0 total:1
> [ 5.778953] ok 3 irq_shutdown_depth_test
>
> Fixes: 66067c3c8a1e ("genirq: Add kunit tests for depth counts")
> Reported-by: Guenter Roeck <linux@...ck-us.net>
> Closes: https://lore.kernel.org/lkml/31a761e4-8f81-40cf-aaf5-d220ba11911c@roeck-us.net/
> Signed-off-by: Brian Norris <briannorris@...omium.org>
Tested-by: Guenter Roeck <linux@...ck-us.net>
Thanks for the quick turnaround!
Guenter