Message-ID: <20250715171452.193ac348@kernel.org>
Date: Tue, 15 Jul 2025 17:14:52 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Mingming Cao <mmc@...ux.ibm.com>
Cc: netdev@...r.kernel.org, horms@...nel.org, bjking1@...ux.ibm.com,
haren@...ux.ibm.com, ricklind@...ux.ibm.com, davemarq@...ux.ibm.com
Subject: Re: [PATCH v3 net-next 1/2] ibmvnic: Use atomic64_t for queue stats
On Mon, 14 Jul 2025 13:35:06 -0400 Mingming Cao wrote:
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 92647e137cf8..79fdba4293a4 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -2314,9 +2314,17 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
> tx_buff = &tx_pool->tx_buff[index];
> adapter->netdev->stats.tx_packets--;
> adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
> - adapter->tx_stats_buffers[queue_num].batched_packets--;
> - adapter->tx_stats_buffers[queue_num].bytes -=
> - tx_buff->skb->len;
> + atomic64_dec(&adapter->tx_stats_buffers[queue_num].batched_packets);
> + if (atomic64_sub_return(tx_buff->skb->len,
> + &adapter->tx_stats_buffers[queue_num].bytes) < 0) {
> + atomic64_set(&adapter->tx_stats_buffers[queue_num].bytes, 0);
> + netdev_warn(adapter->netdev,
Any datapath print needs to be rate limited; otherwise it may flood
the logs.
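Something along these lines, if you keep the message at all (untested,
just to illustrate, reusing the fields from your patch):

	if (net_ratelimit())
		netdev_warn(adapter->netdev,
			    "TX stats underflow on queue %u: bytes (%lld) < skb->len (%u), clamping to 0\n",
			    queue_num,
			    atomic64_read(&adapter->tx_stats_buffers[queue_num].bytes),
			    tx_buff->skb->len);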
> + "TX stats underflow on queue %u: bytes (%lld) < skb->len (%u),\n"
> + "clamping to 0\n",
> + queue_num,
> + atomic64_read(&adapter->tx_stats_buffers[queue_num].bytes),
> + tx_buff->skb->len);
> + }
> dev_kfree_skb_any(tx_buff->skb);
> tx_buff->skb = NULL;
> adapter->netdev->stats.tx_dropped++;
> @@ -2652,10 +2660,10 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
> adapter->tx_send_failed += tx_send_failed;
> adapter->tx_map_failed += tx_map_failed;
> - adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
> - adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
> - adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
> - adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
> + atomic64_add(tx_bpackets, &adapter->tx_stats_buffers[queue_num].batched_packets);
> + atomic64_add(tx_dpackets, &adapter->tx_stats_buffers[queue_num].direct_packets);
> + atomic64_add(tx_bytes, &adapter->tx_stats_buffers[queue_num].bytes);
> + atomic64_add(tx_dropped, &adapter->tx_stats_buffers[queue_num].dropped_packets);
Are atomics really cheap on your platform? Normally having this many
atomic ops per packet would raise performance concerns.
I assume queue accesses are already protected by some sort of lock,
so isn't it enough to make the stats per queue without making them
atomic?
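Rough, untested sketch of what I mean, assuming the writers already run
under the existing per-queue serialization and using plain u64 counters
(u64_stats_sync shown only as one option to keep 32-bit readers sane;
tx_stats here would be &adapter->tx_stats_buffers[queue_num]):

	struct ibmvnic_tx_queue_stats {
		u64 batched_packets;
		u64 direct_packets;
		u64 bytes;
		u64 dropped_packets;
		struct u64_stats_sync syncp;
	};

	/* writer side, already serialized per queue */
	u64_stats_update_begin(&tx_stats->syncp);
	tx_stats->batched_packets += tx_bpackets;
	tx_stats->direct_packets += tx_dpackets;
	tx_stats->bytes += tx_bytes;
	tx_stats->dropped_packets += tx_dropped;
	u64_stats_update_end(&tx_stats->syncp);

	/* reader side (e.g. get_ethtool_stats) */
	do {
		start = u64_stats_fetch_begin(&tx_stats->syncp);
		packets = tx_stats->batched_packets + tx_stats->direct_packets;
		bytes = tx_stats->bytes;
	} while (u64_stats_fetch_retry(&tx_stats->syncp, start));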
--
pw-bot: cr