Message-ID: <CAGVrzcah=gQM9NfM9_BRyWvMELsujsA6qGrjEcGwu8Wxb-Y3mw@mail.gmail.com>
Date:	Mon, 22 Jun 2015 21:46:09 -0700
From:	Florian Fainelli <f.fainelli@...il.com>
To:	Andrew Lunn <andrew@...n.ch>
Cc:	Duan Andy <fugang.duan@...escale.com>,
	David Miller <davem@...emloft.net>,
	Cory Tusar <cory.tusar@...1solutions.com>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCHv3 net-next] net: fec: Ensure clocks are enabled while
 using mdio bus

2015-06-22 19:52 GMT-07:00 Andrew Lunn <andrew@...n.ch>:
>> > static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
>> > {
>> >     struct fec_enet_private *fep = bus->priv;
>> >     unsigned long time_left;
>> > +   int ret;
>> > +
>> > +   ret = clk_prepare_enable(fep->clk_ipg);
>> > +   if (ret)
>> > +           return ret;
>> >
>> >     fep->mii_timeout = 0;
>> >     init_completion(&fep->mdio_done);
>> > @@ -1779,11 +1785,14 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
>> >     if (time_left == 0) {
>> >             fep->mii_timeout = 1;
>> >             netdev_err(fep->netdev, "MDIO read timeout\n");
>> > +           clk_disable_unprepare(fep->clk_ipg);
>> >             return -ETIMEDOUT;
>> >     }
>> >
>> > -   /* return value */
>> > -   return FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA));
>> > +   ret = FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA));
>> > +   clk_disable_unprepare(fep->clk_ipg);
>> > +
>> > +   return ret;
>> >  }
>> >
>>
>
>> I suggest using runtime PM to enable/disable the clock, for
>> performance reasons: not every MDIO bus access needs to
>> enable/disable the clock.
>
> Can you explain that some more? When are you suggesting doing a
> runtime enable/disable? Given the current DSA architecture, I would
> probably do a runtime enable as soon as I look up the mdio bus, and
> never do a runtime disable. Also, how would you include this runtime
> PM in the phy state machine, which is going to be polling your mdio
> bus around once per second for normal PHYs?

I believe the sh_eth driver is an in-tree example of a driver doing
runtime PM for its MDIO bus controller implementation.

>
> Florian, as Maintainer of the phy state machine, what do you think
> about adding runtime PM to it?

I have no objection to adding runtime PM awareness to the PHY
library. Runtime PM should work well here since we are doing
host-initiated reads, at least for polled PHYs. For interrupt-driven
PHYs, I suppose we could also get something to work, although one
needs to be more careful. One potential issue is that runtime PM does
not seem to have any notion of latency built into it, so you do not
really know how long the transition from low-power to functional state
takes before you can successfully issue an MDIO read/write.

You could argue that as long as the sum of the times needed to wake
up, perform the operation, and go back to sleep stays below the
polling period, you are safe, but a) this does not hold for all
devices if we ever shorten the poll interval, and b) it starts putting
near-real-time requirements on all of these operations.

I would suspect that unless the clock feeding the MDIO bus controller
also feeds other power-hungry digital logic, you would get most of
your power savings from enabling link power-management features such
as EEE. Anything else on the digital-logic side might just be noise
compared to that. Technically, you should even be able to do this
between an Ethernet switch and the CPU port, provided the switch
supports it.

My 2 cents.
-- 
Florian