Message-ID: <fbd00642-ab9c-4573-95dc-abba064b0068@seco.com>
Date: Tue, 23 Jan 2024 16:09:52 -0500
From: Sean Anderson <sean.anderson@...o.com>
To: "Russell King (Oracle)" <linux@...linux.org.uk>
Cc: Landen.Chao@...iatek.com, UNGLinuxDriver@...rochip.com,
alexandre.belloni@...tlin.com, andrew@...n.ch,
angelogioacchino.delregno@...labora.com, arinc.unal@...nc9.com,
claudiu.manoil@....com, daniel@...rotopia.org, davem@...emloft.net,
dqfext@...il.com, edumazet@...gle.com, f.fainelli@...il.com,
hkallweit1@...il.com, kuba@...nel.org, linux-arm-kernel@...ts.infradead.org,
linux-mediatek@...ts.infradead.org, matthias.bgg@...il.com,
netdev@...r.kernel.org, olteanv@...il.com, pabeni@...hat.com,
sean.wang@...iatek.com
Subject: Re: [PATCH RFC net-next 03/14] net: phylink: add support for PCS link
change notifications

On 1/23/24 16:05, Russell King (Oracle) wrote:
> On Tue, Jan 23, 2024 at 03:33:57PM -0500, Sean Anderson wrote:
>> On 1/23/24 15:07, Russell King (Oracle) wrote:
>>> On Tue, Jan 23, 2024 at 02:46:15PM -0500, Sean Anderson wrote:
>>>> Hi Russell,
>>>>
>>>> Does there need to be any locking when calling
>>>> phylink_pcs_change? I noticed that you call it from threaded
>>>> IRQ context in [1]. Can that race with phylink_major_config?
>>>
>>> What kind of scenario are you thinking may require locking?
>>
>> Can't we at least get a spurious bounce? E.g.
>>
>> pcs_major_config()
>>     pcs_disable(old_pcs) /* masks IRQ */
>>     old_pcs->phylink = NULL;
>>     new_pcs->phylink = pl;
>>     ...
>>     pcs_enable(new_pcs) /* unmasks IRQ */
>>     ...
>>
>> pcs_handle_irq(new_pcs) /* Link up IRQ */
>>     phylink_pcs_change(new_pcs, true)
>>         phylink_run_resolve(pl)
>>
>> phylink_resolve(pl) /* Link up */
>
> By this time, old_pcs->phylink has been set to NULL as you mentioned
> above.
>
>> pcs_handle_irq(old_pcs) /* Link down IRQ (pending from before pcs_disable) */
>>     phylink_pcs_change(old_pcs, false)
>>         phylink_run_resolve(pl) /* Doesn't see the NULL */
>
> So here, phylink_pcs_change(old_pcs, ...) will read old_pcs->phylink
> and find that it's NULL, and do nothing.

phylink_pcs_change() can run on another CPU. There are no memory barriers
on the read side (until queue_work()), so there is no guarantee that the
CPU handling the IRQ will see the NULL write from phylink_major_config().
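
For concreteness, here is roughly the sketch I have in mind (untested,
just illustrating the READ_ONCE() you suggested against this series'
pcs->phylink field; handling of "up" elided):

	void phylink_pcs_change(struct phylink_pcs *pcs, bool up)
	{
		/* READ_ONCE() stops the compiler from re-reading the
		 * pointer, but it is not a barrier; ordering against
		 * the NULL store in phylink_major_config() would still
		 * rely on queue_work(), or on an explicit
		 * smp_store_release()/smp_load_acquire() pair.
		 */
		struct phylink *pl = READ_ONCE(pcs->phylink);

		if (pl)
			phylink_run_resolve(pl);
	}
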
--Sean
>>> I guess the possibility would be if pcs->phylink changes and the
>>> compiler reads it multiple times - READ_ONCE() should solve
>>> that.
>>>
>>> However, in terms of the mechanics, there's no race.
>>>
>>> During the initial bringup, the resolve worker isn't started
>>> until after phylink_major_config() has completed (it's started
>>> at phylink_enable_and_run_resolve().) So, if
>>> phylink_pcs_change() gets called while in phylink_major_config()
>>> there, it'll see that pl->phylink_disable_state is non-zero, and
>>> won't queue the work.
>>>
>>> The next one is within the worker itself - and there can only be
>>> one instance of the worker running in totality. So, if
>>> phylink_pcs_change() gets called while phylink_major_config() is
>>> running from this path, the only thing it'll do is re-schedule
>>> the resolve worker to run another iteration which is harmless
>>> (whether or not the PCS is still current.)
>>>
>>> The last case is phylink_ethtool_ksettings_set(). This runs
>>> under the state_mutex, which locks out the resolve worker (since
>>> it also takes that mutex).
>>>
>>> So calling phylink_pcs_change() should be pretty harmless
>>> _unless_ the compiler re-reads pcs->phylink multiple times
>>> inside phylink_pcs_change(), which I suppose with modern
>>> compilers is possible. Hence my suggestion above about
>>> READ_ONCE() for that.
>>>
>>> Have you encountered an OOPS because pcs->phylink has become
>>> NULL? Or have you spotted another issue?
>>
>> I was looking at extending this code, and I was wondering if I
>> needed to e.g. take RTNL first. Thanks for the quick response.
>
> Note that phylink_mac_change() gets called in irq context, so this
> stuff can't take any mutexes or the rtnl. It is also intended that
> phylink_pcs_change() is similarly callable in irq context.
>
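
As a hypothetical example of that calling context (the driver structure
and register-read helper below are made up), the intent is that a PCS
driver does no more than this from its interrupt handler:

	static irqreturn_t foo_pcs_irq(int irq, void *dev_id)
	{
		struct foo_pcs *fp = dev_id;

		/* No RTNL, no mutexes here: just report the new link
		 * state and let phylink's resolve worker do the rest.
		 */
		phylink_pcs_change(&fp->pcs, foo_pcs_link_is_up(fp));

		return IRQ_HANDLED;
	}
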