Message-Id: <20150424.120142.515098054679955418.davem@davemloft.net>
Date: Fri, 24 Apr 2015 12:01:42 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: f.fainelli@...il.com
Cc: vivien.didelot@...oirfairelinux.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel@...oirfairelinux.com
Subject: Re: [PATCH] net: mdio-gpio: support access that may sleep
From: Florian Fainelli <f.fainelli@...il.com>
Date: Fri, 24 Apr 2015 08:56:34 -0700
> On 24/04/15 08:04, David Miller wrote:
>> From: Vivien Didelot <vivien.didelot@...oirfairelinux.com>
>> Date: Wed, 22 Apr 2015 13:06:54 -0400
>>
>>> Some systems using mdio-gpio may use GPIOs on message-based busses, which
>>> require sleeping (e.g. a GPIO from an I2C I/O expander).
>>>
>>> Since this driver does not use an IRQ handler, it is safe to use the
>>> _cansleep-suffixed GPIO accessors.
>>>
>>> Signed-off-by: Vivien Didelot <vivien.didelot@...oirfairelinux.com>
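
(For illustration only: a minimal sketch of the kind of accessor change being
discussed, assuming the legacy integer-GPIO API behind the mdio-bitbang
callbacks. The struct and function names below are made up for the sketch,
not the driver's actual ones.)

#include <linux/gpio.h>
#include <linux/mdio-bitbang.h>

struct mdio_gpio_sketch {
	struct mdiobb_ctrl ctrl;
	int mdc;	/* clock line GPIO number */
	int mdio;	/* data line GPIO number */
};

/*
 * Bitbang callbacks for the data and clock lines.  The _cansleep
 * variants allow the backing GPIO controller to sleep, e.g. when the
 * lines sit behind an I2C I/O expander, instead of triggering
 * might_sleep() warnings from the non-sleeping accessors.
 */
static int sketch_get_mdio_data(struct mdiobb_ctrl *ctrl)
{
	struct mdio_gpio_sketch *bb =
		container_of(ctrl, struct mdio_gpio_sketch, ctrl);

	return gpio_get_value_cansleep(bb->mdio);
}

static void sketch_set_mdc(struct mdiobb_ctrl *ctrl, int level)
{
	struct mdio_gpio_sketch *bb =
		container_of(ctrl, struct mdio_gpio_sketch, ctrl);

	gpio_set_value_cansleep(bb->mdc, level);
}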
>>
>> Since this is down underneath the layer of an MII bus, you cannot
>> universally say that these routines are always called in a sleepable
>> context.
>>
>> The PHY layer, and the driver itself above that, might call these
>> routines from timers, interrupts, etc.
>
> The PHY library calls these routines from its state machine workqueue
> for that reason, or from process context (when invoked via ethtool
> ioctl). The only special case is phy_mac_interrupt(), which is callable
> from interrupt context but only schedules the state machine workqueue
> from there, precisely to get out of the "in-interrupt" context.
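
(Sketch of that deferral pattern, with made-up handler and work-function
names; this shows the general shape, not phylib's actual code.)

#include <linux/interrupt.h>
#include <linux/phy.h>
#include <linux/workqueue.h>

struct link_event {
	struct work_struct work;	/* INIT_WORK(..., link_event_work) at probe time */
	struct phy_device *phydev;
};

/* Runs in process context, so sleeping MDIO (and _cansleep GPIO) is fine here. */
static void link_event_work(struct work_struct *work)
{
	struct link_event *ev = container_of(work, struct link_event, work);
	int bmsr = phy_read(ev->phydev, MII_BMSR);	/* may sleep */

	if (bmsr >= 0)
		ev->phydev->link = !!(bmsr & BMSR_LSTATUS);
}

/* Hard IRQ context: no MDIO access here, just punt to the workqueue. */
static irqreturn_t mac_link_isr(int irq, void *dev_id)
{
	struct link_event *ev = dev_id;

	schedule_work(&ev->work);
	return IRQ_HANDLED;
}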
>
> If we were not doing that, a number of things would be broken; for
> instance, the per-MDIO bus mutex would not protect us from anything.
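
(Roughly the shape of the core mdiobus read path, simplified here to show
why that mutex only works if every caller is allowed to sleep.)

#include <linux/hardirq.h>
#include <linux/phy.h>

int mdiobus_read_sketch(struct mii_bus *bus, int addr, u32 regnum)
{
	int ret;

	BUG_ON(in_interrupt());		/* callers must be able to sleep */

	mutex_lock(&bus->mdio_lock);	/* per-MDIO bus mutex */
	ret = bus->read(bus, addr, regnum);
	mutex_unlock(&bus->mdio_lock);

	return ret;
}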
Does the link state polling timer use a workqueue in this manner as
well?