Message-ID: <9538e649-0e9c-45b7-a06f-d4e8250635a6@intel.com>
Date: Fri, 1 Aug 2025 15:11:38 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Simon Horman <horms@...nel.org>
CC: <intel-wired-lan@...ts.osuosl.org>, Michal Kubiak
	<michal.kubiak@...el.com>, Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Tony Nguyen <anthony.l.nguyen@...el.com>, Przemek Kitszel
	<przemyslaw.kitszel@...el.com>, Andrew Lunn <andrew+netdev@...n.ch>, "David
 S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, "Jakub
 Kicinski" <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, "Alexei
 Starovoitov" <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
	<nxne.cnse.osdt.itp.upstreaming@...el.com>, <bpf@...r.kernel.org>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH iwl-next v3 16/18] idpf: add support for XDP on Rx

From: Simon Horman <horms@...nel.org>
Date: Thu, 31 Jul 2025 14:35:57 +0100

> On Wed, Jul 30, 2025 at 06:07:15PM +0200, Alexander Lobakin wrote:
>> Use libeth XDP infra to support running XDP program on Rx polling.
>> This includes all of the possible verdicts/actions.
>> XDP Tx queues are cleaned only in "lazy" mode, when fewer than 1/4 of
>> the descriptors are left free on the ring. libeth helper macros for
>> defining driver-specific XDP functions make sure the compiler can
>> uninline them when needed.
>> Use __LIBETH_WORD_ACCESS to parse descriptors more efficiently when
>> applicable. This gives a noticeable performance boost and code size
>> reduction on x86_64.
>>
>> Co-developed-by: Michal Kubiak <michal.kubiak@...el.com>
>> Signed-off-by: Michal Kubiak <michal.kubiak@...el.com>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> 
> ...
> 
>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> 
> ...
> 
>> @@ -3127,14 +3125,12 @@ static bool idpf_rx_process_skb_fields(struct sk_buff *skb,
>>  	return !__idpf_rx_process_skb_fields(rxq, skb, xdp->desc);
>>  }
>>  
>> -static void
>> -idpf_xdp_run_pass(struct libeth_xdp_buff *xdp, struct napi_struct *napi,
>> -		  struct libeth_rq_napi_stats *ss,
>> -		  const struct virtchnl2_rx_flex_desc_adv_nic_3 *desc)
>> -{
>> -	libeth_xdp_run_pass(xdp, NULL, napi, ss, desc, NULL,
>> -			    idpf_rx_process_skb_fields);
>> -}
>> +LIBETH_XDP_DEFINE_START();
>> +LIBETH_XDP_DEFINE_RUN(static idpf_xdp_run_pass, idpf_xdp_run_prog,
>> +		      idpf_xdp_tx_flush_bulk, idpf_rx_process_skb_fields);
>> +LIBETH_XDP_DEFINE_FINALIZE(static idpf_xdp_finalize_rx, idpf_xdp_tx_flush_bulk,
>> +			   idpf_xdp_tx_finalize);
>> +LIBETH_XDP_DEFINE_END();
>>  
>>  /**
>>   * idpf_rx_hsplit_wa - handle header buffer overflows and split errors
>> @@ -3222,7 +3218,10 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>>  	struct libeth_rq_napi_stats rs = { };
>>  	u16 ntc = rxq->next_to_clean;
>>  	LIBETH_XDP_ONSTACK_BUFF(xdp);
>> +	LIBETH_XDP_ONSTACK_BULK(bq);
>>  
>> +	libeth_xdp_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev,
>> +				rxq->xdpsqs, rxq->num_xdp_txq);
>>  	libeth_xdp_init_buff(xdp, &rxq->xdp, &rxq->xdp_rxq);
>>  
>>  	/* Process Rx packets bounded by budget */
>> @@ -3318,11 +3317,13 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>>  		if (!idpf_rx_splitq_is_eop(rx_desc) || unlikely(!xdp->data))
>>  			continue;
>>  
>> -		idpf_xdp_run_pass(xdp, rxq->napi, &rs, rx_desc);
>> +		idpf_xdp_run_pass(xdp, &bq, rxq->napi, &rs, rx_desc);
>>  	}
>>  
>>  	rxq->next_to_clean = ntc;
>> +
>>  	libeth_xdp_save_buff(&rxq->xdp, xdp);
>> +	idpf_xdp_finalize_rx(&bq);
> 
> This will call __libeth_xdp_finalize_rx(), which calls rcu_read_unlock().
> But there doesn't seem to be a corresponding call to rcu_read_lock().
> 
> Flagged by Sparse.

It's a false positive: rcu_read_lock() is called in libeth_xdp_tx_init_bulk(),
so the lock/unlock pair spans two functions rather than being unbalanced.
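To illustrate why Sparse trips here (a simplified userspace sketch, not the
actual libeth internals — the mock_* names and the depth counter are made up
stand-ins for the real RCU primitives):

```c
#include <stdbool.h>

/* Toy model of the libeth XDP bulk lifetime: the "RCU" read lock is
 * taken in the init helper and released only in the finalize helper.
 * A per-function checker such as Sparse sees an unlock with no matching
 * lock in the same function, even though the pairing is correct across
 * the whole Rx polling path. */
static int rcu_depth;

static void mock_rcu_read_lock(void)   { rcu_depth++; }
static void mock_rcu_read_unlock(void) { rcu_depth--; }

struct mock_bulk { bool active; };

/* Stand-in for libeth_xdp_tx_init_bulk(): enters the read section. */
static void mock_tx_init_bulk(struct mock_bulk *bq)
{
	mock_rcu_read_lock();
	bq->active = true;
}

/* Stand-in for idpf_xdp_finalize_rx(): leaves the read section. */
static void mock_finalize_rx(struct mock_bulk *bq)
{
	bq->active = false;
	mock_rcu_read_unlock();
}

/* Stand-in for idpf_rx_splitq_clean(): lock and unlock balance out
 * across the poll, even though neither helper is self-balanced. */
int mock_rx_poll(void)
{
	struct mock_bulk bq;

	mock_tx_init_bulk(&bq);		/* lock taken here... */
	/* ... per-descriptor XDP processing would run here ... */
	mock_finalize_rx(&bq);		/* ...and released here */

	return rcu_depth;		/* 0: balanced over the whole poll */
}
```

In the real code this could be silenced with __acquires()/__releases()
annotations on the helpers, but the locking itself is balanced.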

> 
>>  
>>  	u64_stats_update_begin(&rxq->stats_sync);
>>  	u64_stats_add(&rxq->q_stats.packets, rs.packets);

Thanks,
Olek
