Message-ID: <pti3xai6zkvitsqaw54sxut4lpb4qupo4c2n5alesb35ndhxv4@ni7ritoqopfe>
Date: Mon, 11 Sep 2023 20:19:21 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Christoph Hellwig <hch@....de>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
Hannes Reinecke <hare@...e.de>,
Sagi Grimberg <sagi@...mberg.me>,
Jason Gunthorpe <jgg@...pe.ca>,
James Smart <james.smart@...adcom.com>,
Chaitanya Kulkarni <kch@...dia.com>
Subject: Re: [RFC v1 4/4] nvmet-discovery: do not use invalid port
On Mon, Sep 11, 2023 at 04:44:47PM +0200, Daniel Wagner wrote:
> On Tue, Sep 05, 2023 at 12:40:25PM +0200, Daniel Wagner wrote:
> > > But I'm still confused how we can get here without req->port
> > > set. Can you try to do a little more analysis as I suspect we have
> > > a deeper problem somewhere.
>
> The problem is that nvme/005 starts to clean up all resources and there
> is a race between the cleanup path and the host trying to figure out
> what's going on (get log page).
>
> We have 3 associations:
>
> assoc 0: systemd/udev triggered 'nvme connect-all' discovery controller
> assoc 1: discovery controller from nvme/005
> assoc 2: i/o controller from nvme/005
Actually, assoc 1 is also an i/o controller for the same hostnqn as assoc
2. It looks more like assoc 0 and assoc 1 are both from the systemd/udev
trigger. But why the heck is the hostnqn for assoc 1 the same as the
hostnqn we use in blktests? Something is really off here.
The rest of my analysis is still valid.
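
For illustration only, below is a rough sketch of the kind of guard the
patch subject implies, placed in nvmet_execute_disc_get_log_page(). The
exact placement and the status code are my assumption here, not
necessarily what the RFC patch does:

    static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
    {
            u16 status = 0;

            /*
             * Hypothetical guard: req->port can be NULL when the get
             * log page command races with the association teardown,
             * so fail the command instead of dereferencing the port.
             * The status code is a guess.
             */
            if (!req->port) {
                    status = NVME_SC_INTERNAL | NVME_SC_DNR;
                    goto out;
            }

            /* ... existing discovery log page handling ... */

    out:
            nvmet_req_complete(req, status);
    }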