Message-ID: <CAJZ5v0jQXwxXCzxiLcVnBQPvrAX=GRUr2O=TknmhAS3U_ZTAtg@mail.gmail.com>
Date: Wed, 18 Jul 2018 09:40:44 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Lyude Paul <lyude@...hat.com>
Cc: Lukas Wunner <lukas@...ner.de>, nouveau@...ts.freedesktop.org,
David Airlie <airlied@...ux.ie>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>,
Linux PM <linux-pm@...r.kernel.org>
Subject: Re: [Nouveau] [PATCH 1/5] drm/nouveau: Prevent RPM callback recursion
in suspend/resume paths
On Tue, Jul 17, 2018 at 8:34 PM, Lyude Paul <lyude@...hat.com> wrote:
> On Tue, 2018-07-17 at 20:32 +0200, Lukas Wunner wrote:
>> On Tue, Jul 17, 2018 at 02:24:31PM -0400, Lyude Paul wrote:
>> > On Tue, 2018-07-17 at 20:20 +0200, Lukas Wunner wrote:
>> > > Okay, the PCI device is suspending and the nvkm_i2c_aux_acquire()
>> > > wants it in resumed state, so is waiting forever for the device to
>> > > runtime suspend in order to resume it again immediately afterwards.
>> > >
>> > > The deadlock in the stack trace you've posted could be resolved using
>> > > the technique I used in d61a5c106351 by adding the following to
>> > > include/linux/pm_runtime.h:
>> > >
>> > > static inline bool pm_runtime_status_suspending(struct device *dev)
>> > > {
>> > > 	return dev->power.runtime_status == RPM_SUSPENDING;
>> > > }
>> > >
>> > > static inline bool is_pm_work(struct device *dev)
>> > > {
>> > > 	struct work_struct *work = current_work();
>> > >
>> > > 	return work && work->func == dev->power.work;
>> > > }
>> > >
>> > > Then adding this to nvkm_i2c_aux_acquire():
>> > >
>> > > 	struct device *dev = pad->i2c->subdev.device->dev;
>> > >
>> > > 	if (!(is_pm_work(dev) && pm_runtime_status_suspending(dev))) {
>> > > 		ret = pm_runtime_get_sync(dev);
>> > > 		if (ret < 0 && ret != -EACCES)
>> > > 			return ret;
>> > > 	}
>> > >
>> > > But here's the catch: This only works for an *async* runtime suspend.
>> > > It doesn't work for pm_runtime_put_sync(), pm_runtime_suspend() etc,
>> > > because then the runtime suspend is executed in the context of the caller,
>> > > not in the context of dev->power.work.
>> > >
>> > > So it's not a full solution, but hopefully something that gets you
>> > > going. I'm not really familiar with the code paths leading to
>> > > nvkm_i2c_aux_acquire() to come up with a full solution off the top
>> > > of my head I'm afraid.
>> >
>> > OK - I was considering doing something similar to that commit beforehand but
>> > I wasn't sure if I was going to just be hacking around an actual issue. That
>> > doesn't seem to be the case. This is very helpful and hopefully I should be
>> > able to figure something out from this, thanks!
>>
>> In some cases, the function acquiring the runtime PM ref is only called
>> from a couple of places and then it would be feasible and appropriate
>> to add a bool parameter to the function telling it to acquire the ref
>> or not. So the function is told using a parameter which context it's
>> running in: In the runtime_suspend code path or some other code path.
>>
>> The technique to use current_work() is an alternative approach to figure
>> out the context if passing in an additional parameter is not feasible
>> for some reason. That was the case with d61a5c106351. That approach
>> only works for work items though.
>
> Something I'm curious about: this isn't the first time I've hit a situation like
> this (see the improper disable_depth fix I added to amdgpu that I now need to go
> and fix), which makes me wonder: is there actually any reason Linux's runtime PM
> core doesn't just turn get()/put() calls in the context of suspend/resume
> callbacks into no-ops by default?
Because it's hard to detect reliably enough and because hiding issues
is a bad idea in general.

As I've just said in the message to Lukas, the fact that you need to
resume another device from within your resume callback indicates that
you're hiding your dependency graph from the core.
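
For illustration only (consumer_dev and supplier_dev below are placeholders,
not anything nouveau actually exposes): a stateless device link with
DL_FLAG_PM_RUNTIME is one way to make such a dependency visible to the core,
so the consumer never has to call pm_runtime_get_sync() on another device
from inside its own suspend/resume callbacks.

#include <linux/device.h>

/*
 * Sketch only: consumer_dev and supplier_dev stand in for whatever
 * struct devices are actually involved.
 */
static int example_declare_pm_dependency(struct device *consumer_dev,
					 struct device *supplier_dev)
{
	struct device_link *link;

	/* Tell the PM core that consumer_dev needs supplier_dev powered up. */
	link = device_link_add(consumer_dev, supplier_dev,
			       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
	if (!link)
		return -ENODEV;

	/*
	 * From now on, runtime-resuming consumer_dev resumes supplier_dev
	 * first, and supplier_dev is kept active while consumer_dev is.
	 */
	return 0;
}
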
Thanks,
Rafael