Date:   Sat, 23 Dec 2017 13:39:45 +0100
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Ulf Hansson <ulf.hansson@...aro.org>
Cc:     Kishon Vijay Abraham I <kishon@...com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Linux PM <linux-pm@...r.kernel.org>,
        Yoshihiro Shimoda <yoshihiro.shimoda.uh@...esas.com>,
        Geert Uytterhoeven <geert@...ux-m68k.org>,
        Linux-Renesas <linux-renesas-soc@...r.kernel.org>
Subject: Re: [PATCH v2 1/3] phy: core: Move runtime PM reference counting to
 the parent device

On Thu, Dec 21, 2017 at 2:39 AM, Rafael J. Wysocki <rafael@...nel.org> wrote:
> On Wed, Dec 20, 2017 at 3:09 PM, Ulf Hansson <ulf.hansson@...aro.org> wrote:
>> Runtime PM in the phy core is deployed using the phy core device,
>> which is created by the phy core and assigned as a child device of the
>> phy provider device.
>>
>> The behaviour around the runtime PM deployment causes some issues
>> during system suspend, in cases where the phy provider device is put
>> into a low power state via a call to the pm_runtime_force_suspend()
>> helper, as is the case for a Renesas SoC, which has its phy provider
>> device attached to the generic PM domain.
>>
>> In more detail, the problem in this case is that
>> pm_runtime_force_suspend() expects the child device of the provider
>> device, which is the phy core device, to be runtime suspended;
>> otherwise a WARN splat will (correctly) be printed when runtime PM
>> gets re-enabled at system resume.
>
> So we are now trying to work around issues with
> pm_runtime_force_suspend().  Lovely. :-/
>
>> In the current scenario, even though phy_power_off() invokes
>> pm_runtime_put() during system suspend, the phy core device doesn't
>> get runtime suspended, because the PM core prevents runtime suspend
>> during the system suspend phases.
>>
>> To solve this problem, let's move the runtime PM deployment from the
>> phy core device to the phy provider device, as this provides similar
>> behaviour. Changing this makes it redundant to enable runtime PM for
>> the phy core device, so let's avoid doing that.
>
> I'm not really convinced that this approach is the best one to be honest.
>
> I'll have a deeper look at this in the next few days, stay tuned.

So IMO the changes you are proposing make sense regardless of the
genpd issue, because they generally simplify the phy code.  However,
the additional use_runtime_pm field in struct phy carries redundant
information (manipulating the reference counters shouldn't matter if
runtime PM is disabled), so it doesn't appear to be necessary.
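
To illustrate, here's a minimal sketch (mine, with error handling and
locking elided, so not the actual patch) of how the get/put pair can
become unconditional once the counting targets the provider device:

        #include <linux/phy/phy.h>
        #include <linux/pm_runtime.h>

        int phy_power_on(struct phy *phy)
        {
                /*
                 * Take a runtime PM reference on the provider device
                 * (the parent of the phy core device) unconditionally.
                 * If runtime PM happens to be disabled for it, the call
                 * merely adjusts the usage counter, and the put below
                 * rebalances it, so no use_runtime_pm gate is needed.
                 */
                pm_runtime_get_sync(phy->dev.parent);

                /* ... the actual power-on sequence goes here ... */

                return 0;
        }

        int phy_power_off(struct phy *phy)
        {
                /* ... the actual power-off sequence goes here ... */

                pm_runtime_put(phy->dev.parent);

                return 0;
        }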

[On a related note, I'm not sure why phy tries to intercept runtime PM
errors and "fix up" the reference counters.  That doesn't look right
to me at all.]
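
For reference, the pattern I mean looks roughly like this (paraphrased
from drivers/phy/phy-core.c as I remember it, not a verbatim quote):

        #include <linux/errno.h>
        #include <linux/phy/phy.h>
        #include <linux/pm_runtime.h>

        static inline int phy_pm_runtime_get(struct phy *phy)
        {
                int ret;

                if (!pm_runtime_enabled(&phy->dev))
                        return -ENOTSUPP;

                ret = pm_runtime_get(&phy->dev);
                if (ret < 0 && ret != -EINPROGRESS)
                        /*
                         * The "fix up" in question: drop the reference
                         * taken above because the resume request failed.
                         */
                        pm_runtime_put_noidle(&phy->dev);

                return ret;
        }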

That said, the current phy code is not strictly invalid.  While it
looks more complicated than necessary, it doesn't do anything
documented as invalid in principle, so saying "The behaviour around
the runtime PM deployment causes some issues during system suspend"
in the changelog describes the problem from a very specific angle.
Simply put, pm_runtime_force_suspend() and the current phy code cannot
work together, so combining them is a bug.  Neither of them is at
fault individually; it's the combination that is incorrect.
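
To spell out the conflict, a sketch of the failing sequence with the
current layout (my reading of it, assuming the phy core device is a
runtime-PM-enabled child of the provider):

        #include <linux/pm_runtime.h>

        static int provider_system_suspend(struct device *dev)
        {
                /*
                 * The PM core blocks runtime suspend during the system
                 * suspend phases, so the phy core child stays active
                 * even after phy_power_off() has done its
                 * pm_runtime_put() ...
                 */
                return pm_runtime_force_suspend(dev);
        }

        static int provider_system_resume(struct device *dev)
        {
                /*
                 * ... and re-enabling runtime PM here finds a suspended
                 * device with an active child, which is what triggers
                 * the (correct) WARN mentioned in the changelog.
                 */
                return pm_runtime_force_resume(dev);
        }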

Fortunately, the phy code can be modified so that it works with
pm_runtime_force_suspend() without problems, but picturing it as
"problematic" because it cannot do that today is not entirely fair
IMO.

Thanks,
Rafael
