Message-ID: <20210331144235.799dea32@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date:   Wed, 31 Mar 2021 14:42:35 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Ong Boon Leong <boon.leong.ong@...el.com>
Cc:     Giuseppe Cavallaro <peppe.cavallaro@...com>,
        Alexandre Torgue <alexandre.torgue@...com>,
        Jose Abreu <joabreu@...opsys.com>,
        "David S . Miller" <davem@...emloft.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        Maxime Coquelin <mcoquelin.stm32@...il.com>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        KP Singh <kpsingh@...nel.org>, netdev@...r.kernel.org,
        linux-stm32@...md-mailman.stormreply.com,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        bpf@...r.kernel.org
Subject: Re: [PATCH net-next v3 5/6] net: stmmac: Add support for XDP_TX
 action

On Wed, 31 Mar 2021 23:41:34 +0800 Ong Boon Leong wrote:
> This patch adds support for the XDP_TX action, which enables an XDP
> program to transmit received frames back.
> 
> This patch has been tested with the "xdp2" app located in the
> samples/bpf dir. The DUT receives burst traffic generated using the
> pktgen script 'pktgen_sample03_burst_single_flow.sh'.
> 
> v3: Added 'nq->trans_start = jiffies' to avoid a TX time-out, as we are
>     sharing the TX queue between the slow path and XDP. Thanks to Jakub
>     Kicinski for pointing this out.
> 
> Signed-off-by: Ong Boon Leong <boon.leong.ong@...el.com>

> +static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
> +				struct xdp_buff *xdp)
> +{
> +	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
> +	int cpu = smp_processor_id();
> +	struct netdev_queue *nq;
> +	int queue;
> +	int res;
> +
> +	if (unlikely(!xdpf))
> +		return -EFAULT;

Can you return -EFAULT here? It looks like the function otherwise
returns positive STMMAC_XDP_* return codes/masks.
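
For illustration, a minimal sketch of one way to keep it within the
verdict scheme (assuming STMMAC_XDP_CONSUMED is the drop verdict the
rest of the patch uses, so the caller's recycle path would handle it):

	if (unlikely(!xdpf))
		return STMMAC_XDP_CONSUMED; /* drop; caller recycles the page */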

> +	queue = stmmac_xdp_get_tx_queue(priv, cpu);
> +	nq = netdev_get_tx_queue(priv->dev, queue);
> +
> +	__netif_tx_lock(nq, cpu);
> +	/* Avoids TX time-out as we are sharing with slow path */
> +	nq->trans_start = jiffies;
> +	res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf);
> +	if (res == STMMAC_XDP_TX) {
> +		stmmac_flush_tx_descriptors(priv, queue);
> +		stmmac_tx_timer_arm(priv, queue);

Would it make sense to arm the timer and flush the descriptors at the
end of the NAPI poll cycle, instead of after every TX frame?
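
Sketching the idea with hypothetical names (an xdp_status mask
accumulated in stmmac_rx(), consumed once per poll):

	/* inside the RX loop: only record the verdict */
	xdp_status |= stmmac_xdp_xmit_back(priv, &xdp);

	/* after the RX loop, once per NAPI poll */
	if (xdp_status & STMMAC_XDP_TX) {
		stmmac_flush_tx_descriptors(priv, tx_queue);
		stmmac_tx_timer_arm(priv, tx_queue);
	}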

> +	}
> +	__netif_tx_unlock(nq);
> +
> +	return res;
> +}

> @@ -4365,16 +4538,26 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>  			xdp.data_hard_start = page_address(buf->page);
>  			xdp_set_data_meta_invalid(&xdp);
>  			xdp.frame_sz = buf_sz;
> +			xdp.rxq = &rx_q->xdp_rxq;
>  
> +			pre_len = xdp.data_end - xdp.data_hard_start -
> +				  buf->page_offset;
>  			skb = stmmac_xdp_run_prog(priv, &xdp);
> +			/* Due to xdp_adjust_tail, the DMA sync for_device
> +			 * must cover the max length the CPU touched
> +			 */
> +			sync_len = xdp.data_end - xdp.data_hard_start -
> +				   buf->page_offset;
> +			sync_len = max(sync_len, pre_len);
>  
>  			/* For any verdict other than XDP_PASS */
>  			if (IS_ERR(skb)) {
>  				unsigned int xdp_res = -PTR_ERR(skb);
>  
>  				if (xdp_res & STMMAC_XDP_CONSUMED) {
> -					page_pool_recycle_direct(rx_q->page_pool,
> -								 buf->page);
> +					page_pool_put_page(rx_q->page_pool,
> +							   virt_to_head_page(xdp.data),
> +							   sync_len, true);

IMHO the dma_sync_size logic is a little questionable, but it's not
really related to your patch; others are already doing the same thing,
so it's fine, I guess.
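
For the record, the sync_len logic as I read it: if the program shrinks
the frame via bpf_xdp_adjust_tail(), data_end moves down, but the CPU
may already have dirtied bytes up to the original tail; if it grows the
tail, the new data_end is the larger one. The max() covers both
directions (values illustrative):

	/* pre_len:  data length before the program ran, e.g. 1500 */
	/* sync_len: length after bpf_xdp_adjust_tail() shrank it, e.g. 1000 */
	sync_len = max(sync_len, pre_len);	/* sync 1500 bytes for the device */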

>  					buf->page = NULL;
>  					priv->dev->stats.rx_dropped++;
