Message-ID: <20210730094907.5vg7qebggttibogz@beryllium.lan>
Date: Fri, 30 Jul 2021 11:49:07 +0200
From: Daniel Wagner <dwagner@...e.de>
To: linux-nvme@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
James Smart <james.smart@...adcom.com>,
Keith Busch <kbusch@...nel.org>,
Ming Lei <ming.lei@...hat.com>,
Sagi Grimberg <sagi@...mberg.me>,
Hannes Reinecke <hare@...e.de>, Wen Xiong <wenxiong@...ibm.com>
Subject: Re: [PATCH v3 0/6] Handle update hardware queues and queue freeze
more carefully
On Mon, Jul 26, 2021 at 07:27:04PM +0200, Daniel Wagner wrote:
> FTR, I've tested the 'prior_ioq_cnt != nr_io_queues' case. In this
> scenario the series works. Though in the case of 'prior_ioq_cnt ==
> nr_io_queues' I see hanging I/Os.
Back to staring at this issue. The hanging I/Os happen in this path
after a remote port has been disabled:
nvme nvme1: NVME-FC{1}: new ctrl: NQN "nqn.1992-08.com.netapp:sn.d646dc63336511e995cb00a0988fb732:subsystem.nvme-svm-dolin-ana_subsystem"
nvme nvme1: NVME-FC{1}: controller connectivity lost. Awaiting Reconnect
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: io failed due to lldd error 6
nvme nvme1: NVME-FC{1}: connectivity re-established. Attempting reconnect
nvme nvme1: NVME-FC{1}: create association : host wwpn 0x100000109b579ef6 rport wwpn 0x201900a09890f5bf: NQN "nqn.1992-08.com.netapp:sn.d646dc63336511e995cb00a0988fb732:subsystem.nvme-svm-dolin-ana_subsystem"
nvme nvme1: NVME-FC{1}: controller connect complete
and all hanging tasks have the same call trace:
task:fio state:D stack: 0 pid:13545 ppid: 13463 flags:0x00000000
Call Trace:
__schedule+0x2d7/0x8f0
schedule+0x3c/0xa0
blk_queue_enter+0x106/0x1f0
? wait_woken+0x80/0x80
submit_bio_noacct+0x116/0x4b0
? submit_bio+0x4b/0x1a0
submit_bio+0x4b/0x1a0
__blkdev_direct_IO_simple+0x20c/0x350
? update_load_avg+0x1ac/0x5e0
? blkdev_iopoll+0x30/0x30
? blkdev_direct_IO+0x4a2/0x520
blkdev_direct_IO+0x4a2/0x520
? update_load_avg+0x1ac/0x5e0
? update_load_avg+0x1ac/0x5e0
? generic_file_read_iter+0x84/0x140
? __blkdev_direct_IO_simple+0x350/0x350
generic_file_read_iter+0x84/0x140
blkdev_read_iter+0x41/0x50
new_sync_read+0x118/0x1a0
vfs_read+0x15a/0x180
ksys_pread64+0x71/0x90
do_syscall_64+0x3c/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
(gdb) l *blk_queue_enter+0x106
0xffffffff81473736 is in blk_queue_enter (block/blk-core.c:469).
464 * queue dying flag, otherwise the following wait may
465 * never return if the two reads are reordered.
466 */
467 smp_rmb();
468
469 wait_event(q->mq_freeze_wq,
470 (!q->mq_freeze_depth &&
471 blk_pm_resume_queue(pm, q)) ||
472 blk_queue_dying(q));
473 if (blk_queue_dying(q))
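For reference, the freeze accounting this wait_event depends on looks
roughly like this (paraphrased from block/blk-mq.c, not verbatim):

    void blk_freeze_queue_start(struct request_queue *q)
    {
            mutex_lock(&q->mq_freeze_lock);
            if (++q->mq_freeze_depth == 1)
                    /* stop new submitters entering the queue */
                    percpu_ref_kill(&q->q_usage_counter);
            mutex_unlock(&q->mq_freeze_lock);
    }

    void blk_mq_unfreeze_queue(struct request_queue *q)
    {
            mutex_lock(&q->mq_freeze_lock);
            if (!--q->mq_freeze_depth) {
                    percpu_ref_resurrect(&q->q_usage_counter);
                    /* this wake-up is what blk_queue_enter() waits for */
                    wake_up_all(&q->mq_freeze_wq);
            }
            mutex_unlock(&q->mq_freeze_lock);
    }

So the fio tasks can only be stuck in that wait_event if mq_freeze_depth
never drops back to zero, i.e. a freeze taken on the reconnect path is
not paired with an unfreeze. That would fit the observation that only
the 'prior_ioq_cnt == nr_io_queues' case hangs, but I still need to
confirm where exactly the unfreeze goes missing.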