[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20190424185937.GB11428@google.com>
Date:   Wed, 24 Apr 2019 13:59:37 -0500
From:   Bjorn Helgaas <helgaas@...nel.org>
To:     Lyude Paul <lyude@...hat.com>
Cc:     David Ober <dober@...ovo.com>, linux-pci@...r.kernel.org,
        nouveau@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
        Karol Herbst <kherbst@...hat.com>,
        Ben Skeggs <skeggsb@...il.com>, stable@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Hans de Goede <hdegoede@...hat.com>
Subject: Re: [PATCH] pci/quirks: Add quirk to reset nvgpu at boot for the
 Lenovo ThinkPad P50

On Mon, Apr 15, 2019 at 02:07:18PM -0400, Lyude Paul wrote:
> On Thu, 2019-04-04 at 09:17 -0500, Bjorn Helgaas wrote:
> > [+cc Hans, author of 0b2fe6594fa2 ("drm/nouveau: Queue hpd_work on (runtime)
> > resume")]
> > 
> > On Fri, Mar 22, 2019 at 06:30:15AM -0500, Bjorn Helgaas wrote:
> > > On Thu, Mar 21, 2019 at 05:48:19PM -0500, Bjorn Helgaas wrote:
> > > > On Wed, Mar 13, 2019 at 06:25:02PM -0400, Lyude Paul wrote:
> > > > > On Fri, 2019-02-15 at 16:17 -0500, Lyude Paul wrote:
> > > > > > On Thu, 2019-02-14 at 18:43 -0600, Bjorn Helgaas wrote:
> > > > > > > On Tue, Feb 12, 2019 at 05:02:30PM -0500, Lyude Paul wrote:
> > > > > > > > On a very specific subset of ThinkPad P50 SKUs, particularly
> > > > > > > > ones that come with a Quadro M1000M chip instead of the M2000M
> > > > > > > > variant, the BIOS seems to have a very nasty habit of not
> > > > > > > > always resetting the secondary Nvidia GPU between full reboots
> > > > > > > > if the laptop is configured in Hybrid Graphics mode. The
> > > > > > > > reason for this happening is unknown, but the following steps
> > > > > > > > and possibly a good bit of patience will reproduce the issue:
> > > > > > > > 
> > > > > > > > 1. Boot up the laptop normally in Hybrid Graphics mode
> > > > > > > > 2. Make sure nouveau is loaded and that the GPU is awake
> > > > > > > > 3. Allow the Nvidia GPU to runtime suspend itself after
> > > > > > > > being idle
> > > > > > > > 4. Reboot the machine, the more sudden the better (e.g.
> > > > > > > > sysrq-b may help)
> > > > > > > > 5. If nouveau loads up properly, reboot the machine again
> > > > > > > > and go back to step 3 until you reproduce the issue
> > > > > > > > 
> > > > > > > > This results in some very strange behavior: the GPU will quite
> > > > > > > > literally be left in exactly the same state it was in when the
> > > > > > > > previously booted kernel started the reboot. This has all
> > > > > > > > sorts of bad side effects: for starters, this completely breaks
> > > > > > > > nouveau starting with a mysterious EVO channel failure that
> > > > > > > > happens well before we've actually used the EVO channel for
> > > > > > > > anything:
> > > > 
> > > > Thanks for the hybrid tutorial (snipped from this response).  IIUC,
> > > > what you said was that in hybrid mode, the Intel GPU drives the
> > > > built-in display and the Nvidia GPU drives any external displays and
> > > > may be used for DRI PRIME rendering (whatever that is).  But since you
> > > > say the Nvidia device gets runtime suspended, I assume there's no
> > > > external display here and you're not using DRI PRIME.
> > > > 
> > > > I wonder if it's related to the fact that the Nvidia GPU has been
> > > > runtime suspended before you do the reboot.  Can you try turning off
> > > > runtime power management for the GPU by setting the runpm module
> > > > parameter to 0?  I *think* this would be booting with
> > > > "nouveau.runpm=0".
> > > 
> > > Sorry, I wasn't really thinking here.  You already *said* this is
> > > related to runtime suspend.  It only happens when the Nvidia GPU has
> > > been suspended.
> > > 
> > > I don't know that much about suspend, but ISTR seeing comments about
> > > resuming devices before we shutdown.  If we do that, maybe there's
> > > some kind of race between that resume and the reboot?
> > 
> > I think we do in fact resume PCI devices before shutdown.  Here's the
> > path I'm looking at:
> > 
> >   device_shutdown
> >     pm_runtime_get_noresume
> >     pm_runtime_barrier
> >     dev->bus->shutdown
> >       pci_device_shutdown
> >         pm_runtime_resume
> >           __pm_runtime_resume(dev, 0)
> >             rpm_resume(dev, 0)
> >               __update_runtime_status(dev, RPM_RESUMING)
> >               callback = RPM_GET_CALLBACK(dev, runtime_resume)
> >               rpm_callback(callback, dev)
> >                 __rpm_callback
> >                   pci_pm_runtime_resume
> >                     drv->pm->runtime_resume
> >                       nouveau_pmops_runtime_resume
> >                         nouveau_do_resume
> >                         schedule_work(hpd_work)   # <---
> >                         ...
> >                         nouveau_display_hpd_work
> >                           pm_runtime_get_sync
> >                           drm_helper_hpd_irq_event
> >                           pm_runtime_mark_last_busy
> >                           pm_runtime_put_sync
> > 
> > I'm curious about that "schedule_work(hpd_work)" near the end because
> > no other drivers seem to use schedule_work() in the runtime_resume
> > path, and I don't know how that synchronizes with the shutdown
> > process.  I don't see anything that waits for
> > nouveau_display_hpd_work() to complete, so it seems like something
> > that could be a race.
> > 
> > I wonder if this problem would be easier to reproduce if you added a
> > sleep in nouveau_display_hpd_work() as in the first hunk below, and I
> > wonder if the problem would then go away if you stopped scheduling
> > hpd_work as in the second hunk?  Obviously the second hunk isn't a
> > solution, it's just an attempt to figure out if I'm looking in the
> > right area.
> 
> So I did try making it so that we cancel hpd_work from
> pci_driver->shutdown and wait for it to complete, however that
> didn't really seem to make any difference. 

Not being a scheduled work expert, I was unsure if this experiment was
equivalent to what I proposed.
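
For reference, the sort of thing I had in mind, as kernel-style
pseudocode (nouveau has no .shutdown hook today as far as I can see, so
the function name below is hypothetical; cancel_work_sync() and the
hpd_work member are the real pieces):

```c
/* Hypothetical pci_driver .shutdown hook for nouveau (sketch only):
 * wait for any in-flight hotplug work to finish before the device
 * is torn down, instead of leaving it racing with the reboot. */
static void nouveau_drm_shutdown(struct pci_dev *pdev)
{
	struct drm_device *drm_dev = pci_get_drvdata(pdev);

	cancel_work_sync(&nouveau_drm(drm_dev)->hpd_work);
}
```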

I'm always suspicious of singleton solutions like this (using
schedule_work() in runtime_resume()) because usually they seem to be
solving a generic problem that should happen on many kinds of
hardware.  The 0b2fe6594fa2 ("drm/nouveau: Queue hpd_work on (runtime)
resume") commit log says:

  We need to call drm_helper_hpd_irq_event() on resume to properly
  detect monitor connection / disconnection on some laptops, use
  hpd_work for this to avoid deadlocks.

The situation of a monitor being connected or disconnected during
suspend can happen to *any* GPU, but the commit only changes nouveau,
which of course raises the question of how we deal with that in other
drivers.  If the Nvidia GPU has some unique behavior related to
monitor connection, that would explain special-case code there, but
the commit doesn't mention anything like that.

It should be simple to revert 0b2fe6594fa2 and see whether it changes
the behavior at all (well, simple except for the fact that this
problem isn't 100% reproducible in the first place).

> Do we want to have this discussion on the bz btw, or is this email
> thread fine?

Email is fine.

> > diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c
> > b/drivers/gpu/drm/nouveau/nouveau_display.c
> > index 55c0fa451163..e50806012d41 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_display.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_display.c
> > @@ -350,6 +350,7 @@ nouveau_display_hpd_work(struct work_struct *work)
> >  
> >  	pm_runtime_get_sync(drm->dev->dev);
> >  
> > +	msleep(2000);
> >  	drm_helper_hpd_irq_event(drm->dev);
> >  
> >  	pm_runtime_mark_last_busy(drm->dev->dev);
> > 
> > 
> > 
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c
> > b/drivers/gpu/drm/nouveau/nouveau_drm.c
> > index 5020265bfbd9..48da72caa017 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_drm.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
> > @@ -946,9 +946,6 @@ nouveau_pmops_runtime_resume(struct device *dev)
> >  	nvif_mask(&device->object, 0x088488, (1 << 25), (1 << 25));
> >  	drm_dev->switch_power_state = DRM_SWITCH_POWER_ON;
> >  
> > -	/* Monitors may have been connected / disconnected during suspend */
> > -	schedule_work(&nouveau_drm(drm_dev)->hpd_work);
> > -
> >  	return ret;
> >  }
> >  
> -- 
> Cheers,
> 	Lyude Paul
> 
