Message-ID: <denqwui6sl5erqmz2gvrwueyxakl5txzbbiu3fgebryzrfxunm@iwxuthct377m>
Date: Thu, 13 Feb 2025 07:16:31 +0000
From: Shinichiro Kawasaki <shinichiro.kawasaki@....com>
To: Daniel Wagner <wagi@...nel.org>
CC: James Smart <james.smart@...adcom.com>, Keith Busch <kbusch@...nel.org>,
hch <hch@....de>, Sagi Grimberg <sagi@...mberg.me>, Hannes Reinecke
<hare@...e.de>, Paul Ely <paul.ely@...adcom.com>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 3/3] nvme-fc: do not ignore connectivity loss during
connecting
On Jan 09, 2025 / 14:30, Daniel Wagner wrote:
> When a connectivity loss occurs while nvme_fc_create_association is
> being executed, it's possible that the ctrl ends up stuck in the LIVE
> state:
>
[...]
>
> There is already the ASSOC_FAILED flag to track connectivity loss event
> but this bit is set too late in the recovery code path. Move this into
> the connectivity loss event handler and synchronize it with the state
> change. This ensures that the ASSOC_FAILED flag is seen by
> nvme_fc_create_io_queues and it does not enter the LIVE state after a
> connectivity loss event. If the connectivity loss event happens after we
> entered the LIVE state, the normal error recovery path is executed.
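If I read the change correctly, the synchronization described above looks
roughly like the sketch below (simplified, not the actual patch): the flag
is set in the connectivity loss event handler, and the transition to LIVE
is done under the same ctrl->lock only when the flag is not set.

        /* connectivity loss handler: record the loss (sketch) */
        spin_lock_irqsave(&ctrl->lock, flags);
        set_bit(ASSOC_FAILED, &ctrl->flags);
        spin_unlock_irqrestore(&ctrl->lock, flags);

        /* end of association setup: enter LIVE only if no loss was seen */
        spin_lock_irqsave(&ctrl->lock, flags);
        if (!test_bit(ASSOC_FAILED, &ctrl->flags))
                changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
        spin_unlock_irqrestore(&ctrl->lock, flags);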
I found that many test cases in the blktests nvme group fail with the
v6.14-rc2 kernel when using the fc transport. I bisected and found that this
patch, merged as commit ee59e3820ca9, is the trigger. When I revert the
commit from v6.14-rc2, the failures disappear. Below is the kernel message
log observed for the test case nvme/003. The kernel reported "BUG: sleeping
function called from invalid context".
Feb 12 12:03:08 testnode1 unknown: run blktests nvme/003 at 2025-02-12 12:03:08
Feb 12 12:03:08 testnode1 kernel: loop0: detected capacity change from 0 to 2097152
Feb 12 12:03:08 testnode1 kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1
Feb 12 12:03:08 testnode1 kernel: nvme nvme1: NVME-FC{0}: create association : host wwpn 0x20001100aa000001 rport wwpn 0x20001100ab000001: NQN "nqn.2014-08.org.nvmexpress.discovery"
Feb 12 12:03:08 testnode1 kernel: (NULL device *): {0:0} Association created
Feb 12 12:03:08 testnode1 kernel: nvmet: Created discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
Feb 12 12:03:08 testnode1 kernel: BUG: sleeping function called from invalid context at kernel/workqueue.c:4355
Feb 12 12:03:08 testnode1 kernel: in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 68705, name: kworker/u16:14
Feb 12 12:03:08 testnode1 kernel: preempt_count: 1, expected: 0
Feb 12 12:03:08 testnode1 kernel: RCU nest depth: 0, expected: 0
Feb 12 12:03:08 testnode1 kernel: 3 locks held by kworker/u16:14/68705:
Feb 12 12:03:08 testnode1 kernel: #0: ffff88813a90b148 ((wq_completion)nvme-wq){+.+.}-{0:0}, at: process_one_work+0xf20/0x1460
Feb 12 12:03:08 testnode1 kernel: #1: ffff88813b397d38 ((work_completion)(&(&ctrl->connect_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7de/0x1460
Feb 12 12:03:08 testnode1 kernel: #2: ffff8881ab5bc018 (&ctrl->lock#2){....}-{3:3}, at: nvme_fc_connect_ctrl_work.cold+0x2200/0x2c9d [nvme_fc]
Feb 12 12:03:08 testnode1 kernel: irq event stamp: 140088
Feb 12 12:03:08 testnode1 kernel: hardirqs last enabled at (140087): [<ffffffff90e222ec>] _raw_spin_unlock_irqrestore+0x4c/0x60
Feb 12 12:03:08 testnode1 kernel: hardirqs last disabled at (140088): [<ffffffff90e21fef>] _raw_spin_lock_irqsave+0x5f/0x70
Feb 12 12:03:08 testnode1 kernel: softirqs last enabled at (139874): [<ffffffff8e5158b9>] __irq_exit_rcu+0x109/0x210
Feb 12 12:03:08 testnode1 kernel: softirqs last disabled at (139867): [<ffffffff8e5158b9>] __irq_exit_rcu+0x109/0x210
Feb 12 12:03:08 testnode1 kernel: Preemption disabled at:
Feb 12 12:03:08 testnode1 kernel: [<0000000000000000>] 0x0
Feb 12 12:03:08 testnode1 kernel: CPU: 0 UID: 0 PID: 68705 Comm: kworker/u16:14 Not tainted 6.14.0-rc2 #256
Feb 12 12:03:08 testnode1 kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
Feb 12 12:03:08 testnode1 kernel: Workqueue: nvme-wq nvme_fc_connect_ctrl_work [nvme_fc]
Feb 12 12:03:08 testnode1 kernel: Call Trace:
Feb 12 12:03:08 testnode1 kernel: <TASK>
Feb 12 12:03:08 testnode1 kernel: dump_stack_lvl+0x6a/0x90
Feb 12 12:03:08 testnode1 kernel: __might_resched.cold+0x1f7/0x23d
Feb 12 12:03:08 testnode1 kernel: ? __pfx___might_resched+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: cancel_delayed_work_sync+0x67/0xb0
Feb 12 12:03:08 testnode1 kernel: nvme_change_ctrl_state+0x104/0x2f0 [nvme_core]
Feb 12 12:03:08 testnode1 kernel: nvme_fc_connect_ctrl_work.cold+0x2252/0x2c9d [nvme_fc]
Feb 12 12:03:08 testnode1 kernel: ? __pfx_nvme_fc_connect_ctrl_work+0x10/0x10 [nvme_fc]
Feb 12 12:03:08 testnode1 kernel: process_one_work+0x85a/0x1460
Feb 12 12:03:08 testnode1 kernel: ? __pfx_lock_acquire+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ? __pfx_process_one_work+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ? assign_work+0x16c/0x240
Feb 12 12:03:08 testnode1 kernel: ? lock_is_held_type+0xd5/0x130
Feb 12 12:03:08 testnode1 kernel: worker_thread+0x5e2/0xfc0
Feb 12 12:03:08 testnode1 kernel: ? __kthread_parkme+0xb1/0x1d0
Feb 12 12:03:08 testnode1 kernel: ? __pfx_worker_thread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ? __pfx_worker_thread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: kthread+0x39d/0x750
Feb 12 12:03:08 testnode1 kernel: ? __pfx_kthread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ? __pfx_kthread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ? __pfx_kthread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ret_from_fork+0x30/0x70
Feb 12 12:03:08 testnode1 kernel: ? __pfx_kthread+0x10/0x10
Feb 12 12:03:08 testnode1 kernel: ret_from_fork_asm+0x1a/0x30
Feb 12 12:03:08 testnode1 kernel: </TASK>
Feb 12 12:03:08 testnode1 kernel: nvme nvme1: NVME-FC{0}: controller connect complete
Feb 12 12:03:08 testnode1 kernel: nvme nvme1: NVME-FC{0}: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", hostnqn: nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
Feb 12 12:03:08 testnode1 kernel: nvme nvme2: NVME-FC{1}: create association : host wwpn 0x20001100aa000001 rport wwpn 0x20001100ab000001: NQN "nqn.2014-08.org.nvmexpress.discovery"
Feb 12 12:03:08 testnode1 kernel: (NULL device *): {0:1} Association created
Feb 12 12:03:08 testnode1 kernel: nvmet: Created discovery controller 2 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:f35c1212-8678-4a7f-99e5-05e61788783b.
Feb 12 12:03:08 testnode1 kernel: nvme nvme2: NVME-FC{1}: controller connect complete
Feb 12 12:03:08 testnode1 kernel: nvme nvme2: NVME-FC{1}: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", hostnqn: nqn.2014-08.org.nvmexpress:uuid:f35c1212-8678-4a7f-99e5-05e61788783b
Feb 12 12:03:08 testnode1 kernel: nvme nvme2: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
Feb 12 12:03:08 testnode1 kernel: (NULL device *): {0:1} Association deleted
Feb 12 12:03:08 testnode1 kernel: (NULL device *): {0:1} Association freed
Feb 12 12:03:08 testnode1 kernel: (NULL device *): Disconnect LS failed: No Association
Feb 12 12:03:18 testnode1 kernel: nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
Feb 12 12:03:18 testnode1 kernel: (NULL device *): {0:0} Association deleted
Feb 12 12:03:18 testnode1 kernel: (NULL device *): {0:0} Association freed
Feb 12 12:03:18 testnode1 kernel: (NULL device *): Disconnect LS failed: No Association
Feb 12 12:03:18 testnode1 kernel: nvme_fc: nvme_fc_create_ctrl: nn-0x10001100ab000001:pn-0x20001100ab000001 - nn-0x10001100aa000001:pn-0x20001100aa000001 combination not found
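If I read the splat correctly, the state change to LIVE is now performed
while the nvme-fc ctrl->lock spinlock (lock #2 above) is held with
interrupts disabled, and nvme_change_ctrl_state() ends up calling
cancel_delayed_work_sync(), which may sleep. In other words, the guarded
transition sketched above boils down to this invalid pattern (simplified,
based on the call trace rather than the actual source):

        spin_lock_irqsave(&ctrl->lock, flags);  /* atomic, IRQs disabled */
        nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
                /* -> cancel_delayed_work_sync() may sleep -> splat above */
        spin_unlock_irqrestore(&ctrl->lock, flags);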