Message-ID: <CAPpK+O2SBm6-zqbiDbUB0yubVTvTrXWn1R+GAPne_+LGvVXp6g@mail.gmail.com>
Date: Tue, 15 Apr 2025 16:57:29 -0700
From: Randy Jennings <randyj@...estorage.com>
To: Sagi Grimberg <sagi@...mberg.me>
Cc: Daniel Wagner <dwagner@...e.de>, Mohamed Khalfella <mkhalfella@...estorage.com>, 
	Daniel Wagner <wagi@...nel.org>, Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...nel.org>, 
	Hannes Reinecke <hare@...e.de>, John Meneghini <jmeneghi@...hat.com>, linux-nvme@...ts.infradead.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 3/3] nvme: delay failover by command quiesce timeout

On Tue, Apr 15, 2025 at 4:35 PM Sagi Grimberg <sagi@...mberg.me> wrote:
>
> >> What I meant was that the user can no longer set kato arbitrarily
> >> long now that we introduce a failover dependency on it.
> >>
> >> We need to set a sane maximum value that will fail over in a
> >> reasonable timeframe. In other words, kato cannot be allowed to be
> >> set by the user to 60 minutes. While we didn't care about that
> >> before, now it means that failover may take 60+ minutes.
> >>
> >> Hence my request to cap kato at an absolute maximum number of
> >> seconds. My vote was 10 (2x the default), but we can also go
> >> with 30.
> > Adding a maximum value for KATO makes a lot of sense to me.  It will
> > help keep us away from a hung task timeout when the full delay is
> > taken into account.  30 makes sense to me in that the maximum should
> > be long enough to handle non-ideal situations, but not a value that
> > you expect people to use regularly.
> >
> > I think CQT should have a maximum allowed value for similar reasons.
> > If we do clamp down on the CQT, we risk the target not completely
> > cleaning up, but it keeps us from a hung task timeout, and _any_
> > delay will help most of the time.
>
> CQT comes from the controller, and if it is high, it effectively means
> that the controller cannot handle faster failover reliably. So I think
> we should leave it as is. It is the vendor's problem.
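
Regarding the kato cap requested above: as a strawman, connect-time
clamping could look roughly like the hypothetical sketch below.
NVME_KATO_MAX and nvmf_set_kato() are made-up names, loosely patterned
on the option parsing in drivers/nvme/host/fabrics.c, not existing code.

/*
 * Hypothetical sketch: cap the user-supplied kato at connect time so
 * that failover cannot be delayed arbitrarily long. All names here
 * are made up.
 */
#define NVME_KATO_MAX	30	/* seconds; assumed absolute maximum */

static int nvmf_set_kato(struct nvmf_ctrl_options *opts, unsigned int kato)
{
	if (kato > NVME_KATO_MAX) {
		pr_warn("kato %us exceeds maximum, clamping to %us\n",
			kato, NVME_KATO_MAX);
		kato = NVME_KATO_MAX;
	}
	opts->kato = kato;
	return 0;
}
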
Okay, leaving CQT to the vendor is one way to approach it.  However,
because of the hung task issue, we would be allowing the vendor to
panic the initiator with a hung task timeout.  Until CCR, and without
implementing other checks (for events which might not happen), this
hung task would happen on every messy disconnect with that
vendor/array.  A cap like the sketch below would at least bound the
exposure.
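
To make that concrete, a hypothetical clamp on the controller-reported
value could look like the sketch below; nvme_effective_cqt(), the
ctrl->cqt field, and NVME_CQT_MAX_MS are all assumed names, not
existing kernel API:

/*
 * Hypothetical sketch: bound the controller-reported CQT so that the
 * extra failover delay stays well below the hung-task watchdog
 * (120 seconds by default). All names here are made up.
 */
#define NVME_CQT_MAX_MS	(30 * 1000)	/* assumed cap, in milliseconds */

static unsigned int nvme_effective_cqt(struct nvme_ctrl *ctrl)
{
	unsigned int cqt = ctrl->cqt;	/* assumed field for the reported CQT */

	if (cqt > NVME_CQT_MAX_MS) {
		dev_warn(ctrl->device,
			 "controller CQT %ums too large, clamping to %ums\n",
			 cqt, NVME_CQT_MAX_MS);
		cqt = NVME_CQT_MAX_MS;
	}
	return cqt;
}

Even clamped, _any_ delay still helps most of the time, per the point
above.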

Sincerely,
Randy Jennings
