Date:   Fri, 8 Jun 2018 08:09:06 +0200
From:   Heiner Kallweit <hkallweit1@...il.com>
To:     Andrew Lunn <andrew@...n.ch>
Cc:     Florian Fainelli <f.fainelli@...il.com>,
        David Miller <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 0/2] net: phy: improve PHY suspend/resume

On 05.06.2018 21:39, Heiner Kallweit wrote:
> On 02.06.2018 22:27, Heiner Kallweit wrote:
>> On 01.06.2018 02:10, Andrew Lunn wrote:
>>>> Configuring the different WoL options isn't handled by writing to
>>>> the PHY registers but by writing to chip / MAC registers.
>>>> Therefore phy_suspend() isn't able to figure out whether WoL is
>>>> enabled or not. Only the parent has the full picture.
>>>
>>> Hi Heiner
>>>
>>> I think you need to look at your different runtime PM domains.  If I
>>> understand the code right, you runtime-suspend if there is no
>>> link. But for this to work correctly, your PHY needs to keep working.
>>> You also cannot assume all accesses to the PHY go via the MAC. Some
>>> calls will go direct to the PHY, and they can trigger MDIO bus
>>> accesses.  So I think you need two runtime PM domains: MAC and MDIO
>>> bus.  Maybe just the PLL? An MDIO bus is a device, so it can have its
>>> own PM callbacks. It is not clear what you need to resume in order to
>>> make MDIO work.
>>>
>> Thanks for your comments!
>> The actual problem is quite small: I get an error at MDIO suspend,
>> but the PHY is suspended later by the driver's suspend callback
>> anyway. Because the problem is small, I'm somewhat reluctant to
>> consider bigger changes like introducing different PM domains.
>>
>> The primary reason for the error is that the network chip is in PCI
>> D3hot at that moment. In addition, for some of the chips supported by
>> the driver, MDIO-relevant PLLs may also be disabled.
>>
>> By the way:
>> When checking PM handling for PHY/MDIO I stumbled across something
>> that can be improved IMO, I'll send a patch for your review.
>>
> I experimented a little, and by adding runtime PM to the MDIO bus and
> PHY device I can make it work:
> The PHY runtime-resumes before entering suspend and resumes its parent
> (MDIO bus), which then resumes its parent (PCI device).
> However, this needs quite a bit of code and is hard to read /
> understand without going through this mail thread.
> 
> And in general I still have doubts this is the right way. Let's
> consider the following scenario:
> 
> A network driver configures WoL in its suspend callback
> (incl. setting the device to wakeup-enabled).
> The suspend callback of the PHY is called before this and therefore
> has no clue that WoL will be configured a little later, with the
> consequence that it will do an unsolicited power-down.
> The network driver then has to detect this and power-up the PHY again.
> This doesn't seem to make much sense, and my best idea is still to
> establish a mechanism by which a device can state: I orchestrate PM
> of my children.
> 
There's one more way to deal with the issue: an empty dev_pm_domain.
We could do:

static struct dev_pm_domain empty = { };

phydev->mdio.dev.pm_domain = &empty;

(Note that pm_domain is a pointer, and the ops member of
struct dev_pm_domain is an embedded struct dev_pm_ops, so leaving the
initializer empty gives all-NULL callbacks.)
This overrides the device_type PM ops. However, I wouldn't necessarily
consider it an elegant solution. Do you have an opinion on that?

I also checked device links, however this doesn't seem to be the right
concept in our case.

> Heiner
> 
>>> It might also help if you do the phy_connect in .ndo_open and
>>> disconnect in .ndo_stop. This is a common pattern in drivers. But some
>>> also do it in probe and remove.
>>>
>> Thanks for the hint. I will move phy_connect_direct accordingly.
>>
>>>      Andrew
>>>
>> Heiner
>>
> 
