Message-ID: <ebfmki7mifmo67x27wwrdpabdbiamalj7rsevxvabyi4sff4ck@4d5fyvrjggkw>
Date: Mon, 11 Sep 2023 16:44:46 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Christoph Hellwig <hch@....de>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
Hannes Reinecke <hare@...e.de>,
Sagi Grimberg <sagi@...mberg.me>,
Jason Gunthorpe <jgg@...pe.ca>,
James Smart <james.smart@...adcom.com>,
Chaitanya Kulkarni <kch@...dia.com>
Subject: Re: [RFC v1 4/4] nvmet-discovery: do not use invalid port
On Tue, Sep 05, 2023 at 12:40:25PM +0200, Daniel Wagner wrote:
> > But I'm still confused how we can get here without req->port
> > set. Can you try to do a little more analysis as I suspect we have
> > a deeper problem somewhere.
The problem is that nvme/005 starts to cleanup all resources and there
is a race between the cleanup path and the host trying to figure out
what's going on (get log page).
We have 3 associations:
assoc 0: systemd/udev triggered 'nvme connect-all' discovery controller
assoc 1: discovery controller from nvme/005
assoc 2: i/o controller from nvme/005
nvme/005 issues a reset_controller but doesn't wait for or check the
result. Instead we go directly into the resource cleanup part, nvme
disconnect, which removes assoc 1 and assoc 2. Then the target cleanup
part starts. At this point, assoc 0 is still around.
nvme nvme3: Removing ctrl: NQN "blktests-subsystem-1"
block nvme3n1: no available path - failing I/O
block nvme3n1: no available path - failing I/O
Buffer I/O error on dev nvme3n1, logical block 89584, async page read
(NULL device *): {0:2} Association deleted
nvmet_fc: nvmet_fc_portentry_unbind: tgtport 000000004f5c9138 pe 00000000e2a2da84
[321] nvmet: ctrl 2 stop keep-alive
(NULL device *): {0:1} Association freed
(NULL device *): Disconnect LS failed: No Association
general protection fault, probably for non-canonical address 0xdffffc00000000a4: 0000 [#1] PREEMPT SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000520-0x0000000000000527]
CPU: 1 PID: 250 Comm: kworker/1:4 Tainted: G W 6.5.0-rc2+ #20 e82c2becb08b573f1fa41dfeddc70ac8f6838a63
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022
Workqueue: nvmet-wq fcloop_fcp_recv_work [nvme_fcloop]
RIP: 0010:nvmet_execute_disc_get_log_page+0x23f/0x8c0 [nvmet]
The target cleanup removes the port from the subsystem
(nvmet_fc_portentry_unbind) and does not check whether there is still an
association around. Right after we have removed assoc 1 and 2, the host
sends a get log page command on assoc 0. But we have already removed the
port binding, and thus the port pointer is stale when
nvmet_execute_disc_get_log_page gets executed.
I am still pondering how to fix this.