Message-ID: <20150731142706.GA11442@nathan3500-linux-VM>
Date: Fri, 31 Jul 2015 09:27:06 -0500
From: Nathan Sullivan <nathan.sullivan@...com>
To: David Miller <davem@...emloft.net>
Cc: f.fainelli@...il.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] net/phy: micrel: Reenable interrupts during resume
On Fri, Jul 31, 2015 at 12:22:04AM -0700, David Miller wrote:
> From: Nathan Sullivan <nathan.sullivan@...com>
> Date: Thu, 30 Jul 2015 18:09:05 -0500
>
> > On Thu, Jul 30, 2015 at 10:00:34AM -0700, David Miller wrote:
> >> From: Nathan Sullivan <nathan.sullivan@...com>
> >> Date: Thu, 30 Jul 2015 10:15:48 -0500
> >>
> >> > Changes for V2: Actually make sure it compiles this time.
> >>
> >> If V1 didn't compile, even for you, then I have a big problem.
> >>
> >> And that problem is that you didn't test this change at all.
> >
> > Sorry about that, I have tested it against 3.14, which is why I had
> > the older interrupt function in v1. On HEAD, the phy no longer
> > suspends when ethernet goes down on our hardware - I'm still working
> > on figuring out why. I'm also surprised no one noticed this behavior
> > before I did, but if the phy never goes into suspend you wouldn't.
>
> I think you should sort out the PHY suspending issue before we move
> forward with this patch.
I believe I found the issue: we are using this PHY with the Cadence macb
as the MAC. The driver currently turns off the management port in
macb_reset_hw; we have disabled that behavior with a local change, since
our hardware typically has multiple PHYs on one MDIO bus. Turning off the
management port also prevents PHY suspend from working correctly, since
the bus goes down before the PHY state machine can stop the PHY.
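For reference, the relevant part of macb_reset_hw looks roughly like this
(paraphrased from drivers/net/ethernet/cadence/macb.c rather than copied,
so the exact code may differ):

static void macb_reset_hw(struct macb *bp)
{
	/* Disable RX and TX; writing 0 to NCR also clears the MPE
	 * (Management Port Enable) bit, which takes down the MDIO bus
	 * for every PHY hanging off this MAC.
	 */
	macb_writel(bp, NCR, 0);

	/* Clear the statistics registers */
	macb_writel(bp, NCR, MACB_BIT(CLRSTAT));
	...
}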
In our local patch, we have macb_reset_hw keep the MDIO bus on if it is
already on. Does that sound like an acceptable fix to you?
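
Roughly, the local change amounts to something like this (a sketch only,
using the register and bit names from macb.h; not the exact patch, and
not yet tested against HEAD):

static void macb_reset_hw(struct macb *bp)
{
	/* Keep MPE set if the management port is already enabled, so the
	 * MDIO bus stays up for PHY suspend and for the other PHYs
	 * sharing the bus; RX and TX are still disabled.
	 */
	u32 ncr = macb_readl(bp, NCR) & MACB_BIT(MPE);

	macb_writel(bp, NCR, ncr);

	/* Clear the statistics registers without dropping MPE */
	macb_writel(bp, NCR, ncr | MACB_BIT(CLRSTAT));

	/* Clear all status flags */
	macb_writel(bp, TSR, -1);
	macb_writel(bp, RSR, -1);
}

The rest of the function (disabling interrupts and so on) would stay as
it is today.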