Message-ID: <945f4ee5-3d9b-4c4c-8d45-ec493a9dcb4c@grimberg.me>
Date: Thu, 17 Apr 2025 01:15:16 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Randy Jennings <randyj@...estorage.com>
Cc: Daniel Wagner <dwagner@...e.de>,
Mohamed Khalfella <mkhalfella@...estorage.com>,
Daniel Wagner <wagi@...nel.org>, Christoph Hellwig <hch@....de>,
Keith Busch <kbusch@...nel.org>, Hannes Reinecke <hare@...e.de>,
John Meneghini <jmeneghi@...hat.com>, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 3/3] nvme: delay failover by command quiesce timeout
>> CQT comes from the controller, and if it is high, it effectively means
>> that the controller cannot handle faster failover reliably. So I think
>> we should leave it as is. It is the vendor's problem.
> Okay, that is one way to approach it. However, because of the hung
> task issue, we would be allowing the vendor to panic the initiator
> with a hung task. Until CCR, and without implementing other checks
> (for events which might not happen), this hung task would happen on
> every messy disconnect with that vendor/array.
It's kind of a pick-your-poison situation, I guess.
We can log an error for controllers that expose an overly long CQT...
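Something along these lines, perhaps (just a sketch of what I mean, not
what the patch does; the ctrl->cqt field name and the threshold value
are assumptions):

	/* arbitrary example threshold, not from the spec */
	#define NVME_CQT_WARN_MS	60000

	/* assumes ctrl->cqt holds the controller CQT in milliseconds */
	if (ctrl->cqt > NVME_CQT_WARN_MS)
		dev_warn(ctrl->device,
			 "long command quiesce timeout (%u ms), failover will be delayed accordingly\n",
			 ctrl->cqt);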
Not sure we'll see a hung task here though; it's not like there is a
kthread blocking on this. It's a delayed work, so I think the watchdog
won't complain about it...
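To illustrate the point (a minimal sketch, not the actual patch; the
cqt_failover_* names are made up): the failover is deferred via a
delayed_work that fires on a kworker once the CQT has elapsed, so no
task ever sits in TASK_UNINTERRUPTIBLE for khungtaskd to flag:

	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	struct cqt_failover_ctx {
		struct delayed_work dwork;
		/* ... per-controller state ... */
	};

	static void cqt_failover_fn(struct work_struct *work)
	{
		struct cqt_failover_ctx *ctx =
			container_of(to_delayed_work(work),
				     struct cqt_failover_ctx, dwork);

		/* requeue / fail over the outstanding commands here */
	}

	static void cqt_schedule_failover(struct cqt_failover_ctx *ctx,
					  unsigned int cqt_ms)
	{
		/*
		 * Nothing sleeps here: the work item simply runs on a
		 * kworker after CQT expires, so the hung-task watchdog
		 * (which only flags tasks stuck in D state) has nothing
		 * to report.
		 */
		INIT_DELAYED_WORK(&ctx->dwork, cqt_failover_fn);
		schedule_delayed_work(&ctx->dwork, msecs_to_jiffies(cqt_ms));
	}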