Message-ID: <20250307123255.GK354511@nvidia.com>
Date: Fri, 7 Mar 2025 08:32:55 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: John Hubbard <jhubbard@...dia.com>,
	Greg KH <gregkh@...uxfoundation.org>,
	Danilo Krummrich <dakr@...nel.org>,
	Joel Fernandes <joelagnelf@...dia.com>,
	Alexandre Courbot <acourbot@...dia.com>,
	Dave Airlie <airlied@...il.com>, Gary Guo <gary@...yguo.net>,
	Joel Fernandes <joel@...lfernandes.org>,
	Boqun Feng <boqun.feng@...il.com>, Ben Skeggs <bskeggs@...dia.com>,
	linux-kernel@...r.kernel.org, rust-for-linux@...r.kernel.org,
	nouveau@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
	paulmck@...nel.org
Subject: Re: [RFC PATCH 0/3] gpu: nova-core: add basic timer subdevice
 implementation

On Fri, Mar 07, 2025 at 11:28:37AM +0100, Simona Vetter wrote:

> > I wouldn't say it is wrong. It is still the correct thing to do, and
> > following down the normal cleanup paths is a good way to ensure the
> > special case doesn't have bugs. The primary difference is you want to
> > understand the device is dead and stop waiting on it faster. Drivers
> > need to consider these things anyhow if they want resiliency against
> > device crashes, PCI link wobbles and so on that don't involve
> > remove().
> 
> Might need to revisit that discussion, but Greg didn't like it when we
> asked for a pci helper to check whether the device is physically gone
> (at least per the driver model). Hacking that in drivers is doable,
> but feels icky.

I think Greg is right here; the driver model has less knowledge than
the driver about whether the device is alive.

The resiliency/fast-failure issue is not just about having observed a
proper hot-unplug; there are many classes of failure that cause the
device HW to malfunction, and a robust driver can detect and recover
from them. mlx5 attempts to do this, for instance.

It turns out that when you deploy clusters with 800,000 NICs in them,
weird HW failures happen constantly, and you have to be resilient on
the SW side and try to recover from them when possible.

So I'd say checking for a -1 read return on PCI is a sufficient
technique for the driver to use to understand whether its device is
still present. mlx5 devices further have an interactive register
operation, a "health check", that proves the device and its PCI path
are alive.

Failing health checks trigger recovery, which shoots down sleeps,
cleanly destroys stuff, resets the device, and starts running
again. IIRC this is actually done with an RDMA hot unplug/plug
sequence autonomously executed inside the driver.
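
Schematically, the pattern is a periodic work item that escalates to
recovery (hypothetical names again, not mlx5's actual code):

/* Sketch of a periodic health check escalating to recovery.
 * mydrv_recover() would wake up sleepers, tear down cleanly,
 * reset the device, and bring it back up.
 */
static void mydrv_health_work(struct work_struct *work)
{
	struct mydrv_priv *priv = container_of(to_delayed_work(work),
					       struct mydrv_priv,
					       health_work);

	if (!mydrv_device_present(priv)) {
		mydrv_recover(priv);
		return;
	}

	schedule_delayed_work(&priv->health_work, MYDRV_HEALTH_PERIOD);
}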

A driver can do a health check immediately in remove() and decide
whether the device is alive or not, to speed up removal in the
hostile hot-unplug case.
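
Something like (again hypothetical):

/* Sketch: probe liveness up front in remove() so a dead device
 * takes a fast teardown path instead of timing out on HW waits.
 */
static void mydrv_remove(struct pci_dev *pdev)
{
	struct mydrv_priv *priv = pci_get_drvdata(pdev);
	bool alive = mydrv_device_present(priv);

	cancel_delayed_work_sync(&priv->health_work);
	mydrv_teardown(priv, alive);	/* skip HW waits if !alive */
}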

Jason
