Message-ID: <2bc090db-7bc1-4810-80c7-61218fb49acf@redhat.com>
Date: Tue, 3 Sep 2024 15:31:32 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Jijie Shao <shaojijie@...wei.com>, davem@...emloft.net,
 edumazet@...gle.com, kuba@...nel.org
Cc: shenjian15@...wei.com, wangpeiyang1@...wei.com, liuyonglong@...wei.com,
 chenhao418@...wei.com, sudongming1@...wei.com, xujunsheng@...wei.com,
 shiyongbang@...wei.com, libaihan@...wei.com, andrew@...n.ch,
 jdamato@...tly.com, horms@...nel.org, jonathan.cameron@...wei.com,
 shameerali.kolothum.thodi@...wei.com, salil.mehta@...wei.com,
 netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V6 net-next 06/11] net: hibmcge: Implement .ndo_start_xmit
 function

On 8/30/24 14:15, Jijie Shao wrote:
> +netdev_tx_t hbg_net_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
> +{
> +	struct hbg_ring *ring = netdev_get_tx_ring(net_dev);
> +	struct hbg_priv *priv = netdev_priv(net_dev);
> +	/* This smp_load_acquire() pairs with smp_store_release() in
> +	 * hbg_tx_buffer_recycle() called in tx interrupt handle process.
> +	 */
> +	u32 ntc = smp_load_acquire(&ring->ntc);
> +	struct hbg_buffer *buffer;
> +	struct hbg_tx_desc tx_desc;
> +	u32 ntu = ring->ntu;
> +
> +	if (unlikely(!hbg_nic_is_open(priv))) {
> +		dev_kfree_skb_any(skb);
> +		return NETDEV_TX_OK;
> +	}
> +
> +	if (unlikely(!skb->len ||
> +		     skb->len > hbg_spec_max_frame_len(priv, HBG_DIR_TX))) {
> +		dev_kfree_skb_any(skb);
> +		net_dev->stats.tx_errors++;
> +		return NETDEV_TX_OK;
> +	}
> +
> +	if (unlikely(hbg_queue_is_full(ntc, ntu, ring) ||
> +		     hbg_fifo_is_full(ring->priv, ring->dir))) {
> +		netif_stop_queue(net_dev);
> +		return NETDEV_TX_BUSY;
> +	}
> +
> +	buffer = &ring->queue[ntu];
> +	buffer->skb = skb;
> +	buffer->skb_len = skb->len;
> +	if (unlikely(hbg_dma_map(buffer))) {
> +		dev_kfree_skb_any(skb);
> +		return NETDEV_TX_OK;
> +	}
> +
> +	buffer->state = HBG_TX_STATE_START;
> +	hbg_init_tx_desc(buffer, &tx_desc);
> +	hbg_hw_set_tx_desc(priv, &tx_desc);
> +
> +	/* This smp_store_release() pairs with smp_load_acquire() in
> +	 * hbg_tx_buffer_recycle() called in tx interrupt handle process.
> +	 */
> +	smp_store_release(&ring->ntu, hbg_queue_next_prt(ntu, ring));

Here you should probably use netif_txq_maybe_stop(), stopping the queue
when the ring is almost full instead of returning NETDEV_TX_BUSY.
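A minimal sketch of that pattern, based on the netif_txq_maybe_stop()
helper from include/net/netdev_queues.h (hbg_free_desc_cnt() is a
hypothetical helper here, not part of the posted patch):

```c
/* Sketch only, not the actual hibmcge code: stop the queue right
 * after posting a descriptor if too few free slots remain, instead
 * of ever returning NETDEV_TX_BUSY from .ndo_start_xmit.
 *
 * hbg_free_desc_cnt() is an assumed helper returning the number of
 * free descriptors in the TX ring.
 */
struct netdev_queue *txq = netdev_get_tx_queue(net_dev, 0);

/* Stop when fewer than MAX_SKB_FRAGS + 1 descriptors are free;
 * the completion path restarts the queue once enough descriptors
 * have been recycled (the start threshold, third argument).
 */
netif_txq_maybe_stop(txq, hbg_free_desc_cnt(ring),
		     MAX_SKB_FRAGS + 1, MAX_SKB_FRAGS + 1);
```

The completion handler then pairs this with netif_txq_completed_wake()
so the stop/start thresholds stay consistent between the two paths.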

> +	net_dev->stats.tx_bytes += skb->len;
> +	net_dev->stats.tx_packets++;

Try to avoid 'dev->stats' usage. Instead you could use per-NAPI (or
per-queue) stats accounting, which avoids contention on shared counters.
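A sketch of per-queue counters protected by u64_stats_sync, as commonly
done in net drivers (struct hbg_ring_stats and the helper are assumed
names for illustration, not the driver's actual API):

```c
/* Sketch, assuming a per-ring stats struct: each TX ring keeps its
 * own counters, updated without contention on the hot path, and the
 * driver folds them into .ndo_get_stats64 under a read retry loop.
 */
struct hbg_ring_stats {
	u64 tx_packets;
	u64 tx_bytes;
	struct u64_stats_sync syncp;
};

static void hbg_update_tx_stats(struct hbg_ring_stats *stats,
				unsigned int len)
{
	u64_stats_update_begin(&stats->syncp);
	stats->tx_packets++;
	stats->tx_bytes += len;
	u64_stats_update_end(&stats->syncp);
}
```

Readers in .ndo_get_stats64 would then use
u64_stats_fetch_begin()/u64_stats_fetch_retry() around the copy.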

Side note: 'net_dev' is quite an unusual variable name for a network device.

Cheers,

Paolo

