Message-ID: <CANn89iLPYoOjMxNjBVHY7GwPFBGuxwRoM9gZZ-fWUUYFYjM1Uw@mail.gmail.com>
Date: Fri, 31 May 2024 14:19:44 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Yunshui Jiang <jiangyunshui@...inos.cn>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com
Subject: Re: [PATCH] net: caif: use DEV_STATS_INC() and DEV_STATS_ADD()
On Fri, May 31, 2024 at 1:40 PM Yunshui Jiang <jiangyunshui@...inos.cn> wrote:
>
> CAIF devices update their dev->stats fields locklessly.
I disagree.

chnl_net_start_xmit() appears to be called while the txq spinlock is
held, so your patch is not needed in the TX path. Look for
spin_lock(&txq->_xmit_lock), called from HARD_TX_LOCK().
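
For reference, here is a trimmed/paraphrased sketch of the core TX
locking, based on net/core/dev.c and include/linux/netdevice.h (not the
exact upstream code, and assuming chnl_net does not advertise
NETIF_F_LLTX):

/* include/linux/netdevice.h (simplified) */
static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
{
        spin_lock(&txq->_xmit_lock);
        WRITE_ONCE(txq->xmit_lock_owner, cpu);
}

/* net/core/dev.c (simplified) */
#define HARD_TX_LOCK(dev, txq, cpu) {                           \
        if ((dev->features & NETIF_F_LLTX) == 0) {              \
                __netif_tx_lock(txq, cpu);                      \
        }                                                       \
}

/* __dev_queue_xmit() then does, roughly: */
        HARD_TX_LOCK(dev, txq, cpu);
        if (!netif_xmit_stopped(txq)) {
                /* -> ops->ndo_start_xmit(), i.e. chnl_net_start_xmit() */
                skb = dev_hard_start_xmit(skb, dev, txq, &rc);
        }
        HARD_TX_UNLOCK(dev, txq);

So by the time chnl_net_start_xmit() runs, txq->_xmit_lock is already
held, which serializes the tx_* counter updates.
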
I cannot yet comment on the receive side; can you add evidence to
support your claim?
> Therefore
> these counters should be updated atomically. Adopt the SMP-safe
> DEV_STATS_INC() and DEV_STATS_ADD() helpers to achieve this.
>
> Signed-off-by: Yunshui Jiang <jiangyunshui@...inos.cn>
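
For reference, DEV_STATS_INC()/DEV_STATS_ADD() roughly expand to
atomic_long operations on counters that alias the plain unsigned long
fields through a union. A trimmed/paraphrased sketch from
include/linux/netdevice.h (details may vary by kernel version):

#define NET_DEV_STAT(FIELD)                     \
        union {                                 \
                unsigned long FIELD;            \
                atomic_long_t __##FIELD;        \
        }

struct net_device_stats {
        NET_DEV_STAT(rx_packets);
        NET_DEV_STAT(tx_packets);
        NET_DEV_STAT(rx_bytes);
        NET_DEV_STAT(tx_bytes);
        NET_DEV_STAT(rx_errors);
        NET_DEV_STAT(tx_errors);
        /* ... */
};

#define DEV_STATS_INC(DEV, FIELD) \
        atomic_long_inc(&(DEV)->stats.__##FIELD)
#define DEV_STATS_ADD(DEV, FIELD, VAL) \
        atomic_long_add((VAL), &(DEV)->stats.__##FIELD)

So the conversion only matters where two contexts can really update the
same counter concurrently, which is the point in question here.
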
> ---
> net/caif/chnl_net.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/net/caif/chnl_net.c b/net/caif/chnl_net.c
> index 47901bd4def1..376f5abba88d 100644
> --- a/net/caif/chnl_net.c
> +++ b/net/caif/chnl_net.c
> @@ -90,7 +90,7 @@ static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt)
> break;
> default:
> kfree_skb(skb);
> - priv->netdev->stats.rx_errors++;
> + DEV_STATS_INC(priv->netdev, rx_errors);
> return -EINVAL;
> }
>
> @@ -103,8 +103,8 @@ static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt)
> netif_rx(skb);
>
> /* Update statistics. */
> - priv->netdev->stats.rx_packets++;
> - priv->netdev->stats.rx_bytes += pktlen;
> + DEV_STATS_INC(priv->netdev, rx_packets);
> + DEV_STATS_ADD(priv->netdev, rx_bytes, pktlen);
>
> return 0;
> }
> @@ -206,14 +206,14 @@ static netdev_tx_t chnl_net_start_xmit(struct sk_buff *skb,
> if (skb->len > priv->netdev->mtu) {
> pr_warn("Size of skb exceeded MTU\n");
> kfree_skb(skb);
> - dev->stats.tx_errors++;
> + DEV_STATS_INC(dev, tx_errors);
> return NETDEV_TX_OK;
> }
>
> if (!priv->flowenabled) {
> pr_debug("dropping packets flow off\n");
> kfree_skb(skb);
> - dev->stats.tx_dropped++;
> + DEV_STATS_INC(dev, tx_dropped);
> return NETDEV_TX_OK;
> }
>
> @@ -228,13 +228,13 @@ static netdev_tx_t chnl_net_start_xmit(struct sk_buff *skb,
> /* Send the packet down the stack. */
> result = priv->chnl.dn->transmit(priv->chnl.dn, pkt);
> if (result) {
> - dev->stats.tx_dropped++;
> + DEV_STATS_INC(dev, tx_dropped);
> return NETDEV_TX_OK;
> }
>
> /* Update statistics. */
> - dev->stats.tx_packets++;
> - dev->stats.tx_bytes += len;
> + DEV_STATS_INC(dev, tx_packets);
> + DEV_STATS_ADD(dev, tx_bytes, len);
>
> return NETDEV_TX_OK;
> }
> --
> 2.34.1
>