Message-Id: <20230607123624.15739-1-petr.pavlu@suse.com>
Date: Wed, 7 Jun 2023 14:36:24 +0200
From: Petr Pavlu <petr.pavlu@...e.com>
To: jgross@...e.com, sstabellini@...nel.org,
oleksandr_tyshchenko@...m.com
Cc: xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
Petr Pavlu <petr.pavlu@...e.com>
Subject: [PATCH] xen/xenbus: Avoid a lockdep warning when adding a watch
The following lockdep warning appears during boot on a Xen dom0 system:
[ 96.388794] ======================================================
[ 96.388797] WARNING: possible circular locking dependency detected
[ 96.388799] 6.4.0-rc5-default+ #8 Tainted: G EL
[ 96.388803] ------------------------------------------------------
[ 96.388804] xenconsoled/1330 is trying to acquire lock:
[ 96.388808] ffffffff82acdd10 (xs_watch_rwsem){++++}-{3:3}, at: register_xenbus_watch+0x45/0x140
[ 96.388847]
but task is already holding lock:
[ 96.388849] ffff888100c92068 (&u->msgbuffer_mutex){+.+.}-{3:3}, at: xenbus_file_write+0x2c/0x600
[ 96.388862]
which lock already depends on the new lock.
[ 96.388864]
the existing dependency chain (in reverse order) is:
[ 96.388866]
-> #2 (&u->msgbuffer_mutex){+.+.}-{3:3}:
[ 96.388874] __mutex_lock+0x85/0xb30
[ 96.388885] xenbus_dev_queue_reply+0x48/0x2b0
[ 96.388890] xenbus_thread+0x1d7/0x950
[ 96.388897] kthread+0xe7/0x120
[ 96.388905] ret_from_fork+0x2c/0x50
[ 96.388914]
-> #1 (xs_response_mutex){+.+.}-{3:3}:
[ 96.388923] __mutex_lock+0x85/0xb30
[ 96.388930] xenbus_backend_ioctl+0x56/0x1c0
[ 96.388935] __x64_sys_ioctl+0x90/0xd0
[ 96.388942] do_syscall_64+0x5c/0x90
[ 96.388950] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[ 96.388957]
-> #0 (xs_watch_rwsem){++++}-{3:3}:
[ 96.388965] __lock_acquire+0x1538/0x2260
[ 96.388972] lock_acquire+0xc6/0x2b0
[ 96.388976] down_read+0x2d/0x160
[ 96.388983] register_xenbus_watch+0x45/0x140
[ 96.388990] xenbus_file_write+0x53d/0x600
[ 96.388994] vfs_write+0xe4/0x490
[ 96.389003] ksys_write+0xb8/0xf0
[ 96.389011] do_syscall_64+0x5c/0x90
[ 96.389017] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[ 96.389023]
other info that might help us debug this:
[ 96.389025] Chain exists of:
xs_watch_rwsem --> xs_response_mutex --> &u->msgbuffer_mutex
[ 96.413429] Possible unsafe locking scenario:
[ 96.413430]        CPU0                    CPU1
[ 96.413430]        ----                    ----
[ 96.413431]   lock(&u->msgbuffer_mutex);
[ 96.413432]                                lock(xs_response_mutex);
[ 96.413433]                                lock(&u->msgbuffer_mutex);
[ 96.413434]   rlock(xs_watch_rwsem);
[ 96.413436]
*** DEADLOCK ***
[ 96.413436] 1 lock held by xenconsoled/1330:
[ 96.413438] #0: ffff888100c92068 (&u->msgbuffer_mutex){+.+.}-{3:3}, at: xenbus_file_write+0x2c/0x600
[ 96.413446]
An ioctl call IOCTL_XENBUS_BACKEND_SETUP (record #1 in the report)
results in calling xenbus_alloc() -> xs_suspend(), which introduces the
ordering xs_watch_rwsem --> xs_response_mutex. The xenbus_thread()
operation (record #2) creates the ordering xs_response_mutex -->
&u->msgbuffer_mutex. An XS_WATCH write to the xenbus file then results
in a complaint about the opposite lock order &u->msgbuffer_mutex -->
xs_watch_rwsem.
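In rough pseudocode, the three paths can be sketched as follows (a
simplified condensation of the report above, not the exact call
chains):

	/* Path 1: ioctl(IOCTL_XENBUS_BACKEND_SETUP) -> xenbus_alloc() -> xs_suspend() */
	down_write(&xs_watch_rwsem);		/* xs_watch_rwsem taken first ...	*/
	mutex_lock(&xs_response_mutex);		/* ... then xs_response_mutex		*/

	/* Path 2: xenbus_thread() -> xenbus_dev_queue_reply() */
	mutex_lock(&xs_response_mutex);		/* xs_response_mutex held ...		*/
	mutex_lock(&u->msgbuffer_mutex);	/* ... when msgbuffer_mutex is taken	*/

	/* Path 3: XS_WATCH write -> xenbus_file_write() -> register_xenbus_watch() */
	mutex_lock(&u->msgbuffer_mutex);	/* msgbuffer_mutex held ...		*/
	down_read(&xs_watch_rwsem);		/* ... closing the cycle		*/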
The dependency xs_watch_rwsem --> xs_response_mutex is spurious. Avoid
it and the warning by changing the locking order in xs_suspend(): first
acquire xs_response_mutex and then xs_watch_rwsem. Also reverse the
unlocking order in xs_suspend_cancel() for consistency, but keep
xs_resume() as is because it needs to release xs_watch_rwsem only after
exiting xs suspend and re-adding all watches.
Signed-off-by: Petr Pavlu <petr.pavlu@...e.com>
---
drivers/xen/xenbus/xenbus_xs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index 12e02eb01f59..028a182bcc9e 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -840,8 +840,8 @@ void xs_suspend(void)
 {
 	xs_suspend_enter();
 
-	down_write(&xs_watch_rwsem);
 	mutex_lock(&xs_response_mutex);
+	down_write(&xs_watch_rwsem);
 }
 
 void xs_resume(void)
@@ -866,8 +866,8 @@ void xs_resume(void)
 
 void xs_suspend_cancel(void)
 {
-	mutex_unlock(&xs_response_mutex);
 	up_write(&xs_watch_rwsem);
+	mutex_unlock(&xs_response_mutex);
 
 	xs_suspend_exit();
 }
--
2.35.3