Message-ID: <20240701073519.GJ17134@kernel.org>
Date: Mon, 1 Jul 2024 08:35:19 +0100
From: Simon Horman <horms@...nel.org>
To: Roger Quadros <rogerq@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Siddharth Vadapalli <s-vadapalli@...com>,
Julien Panis <jpanis@...libre.com>, Andrew Lunn <andrew@...n.ch>,
srk@...com, vigneshr@...com, danishanwar@...com,
Pekka Varis <p-varis@...com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-omap@...r.kernel.org
Subject: Re: [PATCH net-next v2 1/7] net: ethernet: ti: am65-cpsw: Introduce
multi queue Rx
On Fri, Jun 28, 2024 at 03:01:50PM +0300, Roger Quadros wrote:
> am65-cpsw can support up to 8 queues at Rx.
> Use a macro AM65_CPSW_MAX_RX_QUEUES to indicate that.
> As there is only one DMA channel for RX traffic, the
> 8 queues come as 8 flows in that channel.
>
> By default, we will start with 1 flow as defined by the
> macro AM65_CPSW_DEFAULT_RX_CHN_FLOWS.
>
> The user can change the number of flows via ethtool, e.g.
> 'ethtool -L ethx rx <N>'
>
> All traffic will still come on flow 0. To get traffic on
> different flows, classifiers will need to be set up.
>
> Signed-off-by: Roger Quadros <rogerq@...nel.org>
Hi Roger,
The nits below notwithstanding, this looks good to me.
Reviewed-by: Simon Horman <horms@...nel.org>
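On the commit message itself: for anyone following along, a minimal sketch of
the two macros it describes, assuming only the values stated there (the actual
definitions are part of this patch and are not reproduced here):

        /* Sketch only: values taken from the commit message above. */
        #define AM65_CPSW_MAX_RX_QUEUES        8 /* 8 flows on the single RX DMA channel */
        #define AM65_CPSW_DEFAULT_RX_CHN_FLOWS 1 /* start with one flow by default */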
> ---
> drivers/net/ethernet/ti/am65-cpsw-ethtool.c | 59 +++--
> drivers/net/ethernet/ti/am65-cpsw-nuss.c | 358 ++++++++++++++++------------
> drivers/net/ethernet/ti/am65-cpsw-nuss.h | 35 +--
> 3 files changed, 272 insertions(+), 180 deletions(-)
>
> diff --git a/drivers/net/ethernet/ti/am65-cpsw-ethtool.c b/drivers/net/ethernet/ti/am65-cpsw-ethtool.c
> index a1d0935d1ebe..3106bf07eb1a 100644
> --- a/drivers/net/ethernet/ti/am65-cpsw-ethtool.c
> +++ b/drivers/net/ethernet/ti/am65-cpsw-ethtool.c
> @@ -429,7 +429,7 @@ static void am65_cpsw_get_channels(struct net_device *ndev,
>
> ch->max_rx = AM65_CPSW_MAX_RX_QUEUES;
> ch->max_tx = AM65_CPSW_MAX_TX_QUEUES;
> - ch->rx_count = AM65_CPSW_MAX_RX_QUEUES;
> + ch->rx_count = common->rx_ch_num_flows;
> ch->tx_count = common->tx_ch_num;
> }
>
> @@ -448,8 +448,9 @@ static int am65_cpsw_set_channels(struct net_device *ndev,
> return -EBUSY;
>
> am65_cpsw_nuss_remove_tx_chns(common);
> + am65_cpsw_nuss_remove_rx_chns(common);
>
> - return am65_cpsw_nuss_update_tx_chns(common, chs->tx_count);
> + return am65_cpsw_nuss_update_tx_rx_chns(common, chs->tx_count, chs->rx_count);
nit: This line could be trivially wrapped to be <= 80 columns wide,
as is still preferred by networking code. Likewise in a few
other places in this patch (set).
Flagged by checkpatch.pl --max-line-length=80
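For example, one possible wrapping of the line above (whitespace only, no
functional change, continuation aligned in the usual kernel style):

        return am65_cpsw_nuss_update_tx_rx_chns(common, chs->tx_count,
                                                chs->rx_count);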
> }
>
> static void
> @@ -921,10 +922,12 @@ static int am65_cpsw_get_coalesce(struct net_device *ndev, struct ethtool_coales
> {
> struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
> struct am65_cpsw_tx_chn *tx_chn;
> + struct am65_cpsw_rx_flow *rx_flow;
nit: Please consider arranging local variables in reverse xmas tree order -
longest line to shortest - in networking code.
Edward Cree's tool can be helpful here:
https://github.com/ecree-solarflare/xmastree
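For this function that would look roughly like the following (declarations
taken from the hunk above, reordered longest line first):

        struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
        struct am65_cpsw_rx_flow *rx_flow;
        struct am65_cpsw_tx_chn *tx_chn;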
>
> tx_chn = &common->tx_chns[0];
> + rx_flow = &common->rx_chns.flows[0];
>
> - coal->rx_coalesce_usecs = common->rx_pace_timeout / 1000;
> + coal->rx_coalesce_usecs = rx_flow->rx_pace_timeout / 1000;
> coal->tx_coalesce_usecs = tx_chn->tx_pace_timeout / 1000;
>
> return 0;
...