Message-ID: <pzgeaisqmqz5up5fiorl46lmb6xglpdu4hp5lxotclnzvpfjrj@mgyfvcrvpl4x>
Date: Tue, 12 Sep 2023 08:38:32 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Christoph Hellwig <hch@....de>, g@...urine.lan
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
Hannes Reinecke <hare@...e.de>,
Sagi Grimberg <sagi@...mberg.me>,
Jason Gunthorpe <jgg@...pe.ca>,
James Smart <james.smart@...adcom.com>,
Chaitanya Kulkarni <kch@...dia.com>
Subject: Re: [RFC v1 4/4] nvmet-discovery: do not use invalid port

> > We have 3 associations:
> >
> > assoc 0: systemd/udev triggered 'nvme connect-all' discovery controller
> > assoc 1: discovery controller from nvme/005
> > assoc 2: i/o controller from nvme/005
>
> Actually, assoc 1 is also an I/O controller for the same hostnqn as
> assoc 2. It looks more like assoc 0 and 1 are both from the
> systemd/udev trigger. But why the heck is the hostnqn for assoc 1 the
> same as the hostnqn we use in blktests? Something is really off here.
>
> The rest of my analysis is still valid.

I figured it out. I was still using an older version of nvme-cli which
didn't apply the hostnqn correctly. We fixed this in the meantime. With
the latest git version of nvme-cli the second I/O controller is not set
up anymore and the crash is gone. Though we still don't clean up all
resources, as the kernel module complains with:
[41707.083039] nvmet_fc: nvmet_fc_exit_module: targetport list not empty
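
For reference, a minimal, hypothetical sketch (kernel-style C, not the
actual drivers/nvme/target/fc.c code) of the kind of module-exit sanity
check that emits a "targetport list not empty" warning when target ports
are still registered at unload time. The list, lock, and function names
are made up for illustration:

/*
 * Hypothetical sketch, not the real nvmet_fc implementation: warn at
 * module exit if registered target ports were never torn down.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/printk.h>

static LIST_HEAD(example_targetport_list);
static DEFINE_SPINLOCK(example_targetport_lock);

static int __init example_fc_init_module(void)
{
        return 0;
}
module_init(example_fc_init_module);

static void __exit example_fc_exit_module(void)
{
        unsigned long flags;
        bool empty;

        spin_lock_irqsave(&example_targetport_lock, flags);
        empty = list_empty(&example_targetport_list);
        spin_unlock_irqrestore(&example_targetport_lock, flags);

        /* every registered target port should be gone by now */
        if (!empty)
                pr_warn("%s: targetport list not empty\n", __func__);
}
module_exit(example_fc_exit_module);

MODULE_LICENSE("GPL");

If a check like this fires, it would point at a teardown/unregister path
that did not run (or did not complete) before module exit, which fits the
leftover resources mentioned above.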