Message-ID: <6a8bde60-1a76-4951-b20e-dd38d93b1918@kernel.org>
Date: Thu, 14 Dec 2023 15:50:53 +0200
From: Roger Quadros <rogerq@...nel.org>
To: Vladimir Oltean <vladimir.oltean@....com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, shuah@...nel.org, s-vadapalli@...com,
r-gunasekaran@...com, vigneshr@...com, srk@...com,
horms@...nel.org, p-varis@...com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 net-next 05/11] net: ethernet: am65-cpsw: cleanup
TAPRIO handling
On 14/12/2023 15:41, Vladimir Oltean wrote:
> On Thu, Dec 14, 2023 at 03:36:57PM +0200, Roger Quadros wrote:
>> Actually, this code is already present upstream. I'm only moving it around
>> in this patch.
>>
>> Based on the error message, and looking at am65_cpsw_est_check_scheds() and
>> am65_cpsw_est_set_sched_list(), which are called later in am65_cpsw_taprio_replace()
>> and both eventually call am65_est_cmd_ns_to_cnt(), which expects a valid link_speed,
>> my understanding is that the author intended to have a valid link_speed before
>> proceeding further.
>>
>> Although it seems the netif_running() check isn't enough to guarantee a valid
>> link_speed, as the link could still be down even after the netif is brought up.
>>
>> Another gap is that in am65_cpsw_est_link_up(), if the link was down for more than
>> 1 second, it abruptly calls am65_cpsw_taprio_destroy().
>>
>> So I think we need to do the following to improve taprio support in this driver:
>> 1) accept the taprio schedule irrespective of netif/link_speed status
>> 2) call pm_runtime_get()/put() around any device access, regardless of netif/link_speed state
>> 3) on link_up, when we have a valid link_speed and a taprio schedule, apply it.
>> 4) on link_down, remove the taprio schedule from the controller.
>>
>> But my concern is that this is a decent amount of work, and I don't want to delay
>> this series. The original subject of this patch series was mqprio/frame-preemption/coalescing. ;)
>>
>> Can we please defer taprio enhancement to a separate series? Thanks!
>
> Ok, sounds fair to have some further taprio clean-up scheduled for later.
> I would also add taprio_offload_get() to the list of improvements that
> could be made.
Noted. Thanks!
--
cheers,
-roger