Message-ID: <20231023084943.GE704032@linux.intel.com>
Date: Mon, 23 Oct 2023 10:49:43 +0200
From: Stanislaw Gruszka <stanislaw.gruszka@...ux.intel.com>
To: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Matthew Wilcox <willy@...radead.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>
Subject: Re: [PATCH] XArray: Make xa_lock_init macro

On Mon, Oct 02, 2023 at 10:25:35AM +0200, Stanislaw Gruszka wrote:
> Make xa_init_flags() a macro to avoid false-positive lockdep splats.

Friendly ping. The subject should be changed to mention xa_init_flags(),
but is anything else needed here to get it applied?

Regards
Stanislaw

> When spin_lock_init() is used inside an initialization function (like
> in xa_init_flags()) which can be called many times, lockdep assigns
> the same key to different locks.
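>
> This is because the key lives at the macro expansion site: with lockdep
> enabled, spin_lock_init() expands to roughly the following (a sketch of
> the include/linux/spinlock.h definition), so a single expansion inside
> an inline function is shared by every lock initialized through that
> function:
>
> 	#define spin_lock_init(lock)				\
> 	do {							\
> 		/* one static key per expansion site */		\
> 		static struct lock_class_key __key;		\
> 								\
> 		__raw_spin_lock_init(spinlock_check(lock),	\
> 				     #lock, &__key, LD_WAIT_CONFIG); \
> 	} while (0)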
>
> For example, this splat is seen with the intel_vpu driver, which uses
> two xarrays and has two separate xa_init_flags() calls:
>
> [ 1139.148679] WARNING: inconsistent lock state
> [ 1139.152941] 6.6.0-hardening.1+ #2 Tainted: G OE
> [ 1139.158758] --------------------------------
> [ 1139.163024] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
> [ 1139.169018] kworker/10:1/109 [HC1[1]:SC0[0]:HE0:SE1] takes:
> [ 1139.174576] ffff888137237150 (&xa->xa_lock#18){?.+.}-{2:2}, at: ivpu_mmu_user_context_mark_invalid+0x1c/0x80 [intel_vpu]
> [ 1139.185438] {HARDIRQ-ON-W} state was registered at:
> [ 1139.190305] lock_acquire+0x1a3/0x4a0
> [ 1139.194055] _raw_spin_lock+0x2c/0x40
> [ 1139.197800] ivpu_submit_ioctl+0xf0b/0x3520 [intel_vpu]
> [ 1139.203114] drm_ioctl_kernel+0x201/0x3f0 [drm]
> [ 1139.207791] drm_ioctl+0x47d/0xa20 [drm]
> [ 1139.211846] __x64_sys_ioctl+0x12e/0x1a0
> [ 1139.215849] do_syscall_64+0x59/0x90
> [ 1139.219509] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
> [ 1139.224636] irq event stamp: 45500
> [ 1139.228037] hardirqs last enabled at (45499): [<ffffffff92ef0314>] _raw_spin_unlock_irq+0x24/0x50
> [ 1139.236961] hardirqs last disabled at (45500): [<ffffffff92eadf8f>] common_interrupt+0xf/0x90
> [ 1139.245457] softirqs last enabled at (44956): [<ffffffff92ef3430>] __do_softirq+0x4c0/0x712
> [ 1139.253862] softirqs last disabled at (44461): [<ffffffff907df310>] irq_exit_rcu+0xa0/0xd0
> [ 1139.262098]
> other info that might help us debug this:
> [ 1139.268604] Possible unsafe locking scenario:
>
> [ 1139.274505] CPU0
> [ 1139.276955] ----
> [ 1139.279403] lock(&xa->xa_lock#18);
> [ 1139.282978] <Interrupt>
> [ 1139.285601] lock(&xa->xa_lock#18);
> [ 1139.289345]
> *** DEADLOCK ***
>
> Lockdep falsely identifies the xa_lock of two different xarrays as the
> same lock and reports a possible deadlock. A more detailed description
> of the problem is provided in commit c21f11d182c2 ("drm: fix
> drmm_mutex_init()").
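>
> A minimal sketch of the effect (the field names below are illustrative,
> modelled on intel_vpu's two xarrays): with xa_init_flags() as a macro,
> spin_lock_init() expands separately at each call site, so lockdep
> assigns each xarray its own static lock_class_key and no longer
> conflates the two locks:
>
> 	xa_init_flags(&vdev->context_xa, XA_FLAGS_ALLOC);	   /* key A */
> 	xa_init_flags(&vdev->submitted_jobs_xa, XA_FLAGS_ALLOC1); /* key B */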
>
> Signed-off-by: Stanislaw Gruszka <stanislaw.gruszka@...ux.intel.com>
> ---
> include/linux/xarray.h | 17 +++++++----------
> 1 file changed, 7 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
> index cb571dfcf4b1..409d9d739ee9 100644
> --- a/include/linux/xarray.h
> +++ b/include/linux/xarray.h
> @@ -375,12 +375,12 @@ void xa_destroy(struct xarray *);
> *
> * Context: Any context.
> */
> -static inline void xa_init_flags(struct xarray *xa, gfp_t flags)
> -{
> - spin_lock_init(&xa->xa_lock);
> - xa->xa_flags = flags;
> - xa->xa_head = NULL;
> -}
> +#define xa_init_flags(_xa, _flags) \
> +do { \
> + spin_lock_init(&(_xa)->xa_lock);\
> + (_xa)->xa_flags = (_flags); \
> + (_xa)->xa_head = NULL; \
> +} while (0)
>
> /**
> * xa_init() - Initialise an empty XArray.
> @@ -390,10 +390,7 @@ static inline void xa_init_flags(struct xarray *xa, gfp_t flags)
> *
> * Context: Any context.
> */
> -static inline void xa_init(struct xarray *xa)
> -{
> - xa_init_flags(xa, 0);
> -}
> +#define xa_init(xa) xa_init_flags(xa, 0)
>
> /**
> * xa_empty() - Determine if an array has any present entries.
> --
> 2.25.1
>