Open Source and information security mailing list archives
Message-ID: <YkL0wfgyCq5s8vdu@boxer>
Date: Tue, 29 Mar 2022 14:00:01 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Ivan Vecera <ivecera@...hat.com>
Cc: netdev@...r.kernel.org, poros@...hat.com, mschmidt@...hat.com,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	Tony Nguyen <anthony.l.nguyen@...el.com>,
	"David S. Miller" <davem@...emloft.net>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	John Fastabend <john.fastabend@...il.com>,
	Andrii Nakryiko <andrii@...nel.org>,
	Martin KaFai Lau <kafai@...com>, Song Liu <songliubraving@...com>,
	Yonghong Song <yhs@...com>, KP Singh <kpsingh@...nel.org>,
	Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
	Krzysztof Kazimierczak <krzysztof.kazimierczak@...el.com>,
	Alexander Lobakin <alexandr.lobakin@...el.com>,
	"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
	open list <linux-kernel@...r.kernel.org>,
	"open list:XDP (eXpress Data Path)" <bpf@...r.kernel.org>
Subject: Re: [PATCH net] ice: Fix logic of getting XSK pool associated with Tx queue

On Tue, Mar 29, 2022 at 12:27:51PM +0200, Ivan Vecera wrote:
> The function ice_tx_xsk_pool(), used to get the XSK buffer pool
> associated with an XDP Tx queue, returns NULL when the number of
> ordinary Tx queues is not equal to num_possible_cpus().
> 
> The function computes the XDP Tx queue ID as the expression
> `ring->q_index - vsi->num_xdp_txq`, but this is wrong because
> XDP Tx queues are placed after the ordinary ones, so the correct
> formula is `ring->q_index - vsi->alloc_txq`.
> 
> Prior to commit 792b2086584f ("ice: fix vsi->txq_map sizing") the
> number of XDP Tx queues was equal to the number of ordinary Tx
> queues, so the bug in the mentioned function was hidden.
> 
> Reproducer:
> host# ethtool -L ens7f0 combined 1
> host# ./xdpsock -i ens7f0 -q 0 -t -N
> samples/bpf/xdpsock_user.c:kick_tx:794: errno: 6/"No such device or address"
> 
>  sock0@...7f0:0 txonly xdp-drv
>                    pps            pkts           0.00
> rx                 0              0
> tx                 0              0
> 
> Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
> Fixes: 792b2086584f ("ice: fix vsi->txq_map sizing")
> Signed-off-by: Ivan Vecera <ivecera@...hat.com>

Thanks for this fix! I did exactly the same patch yesterday and it's
already applied to bpf tree:

https://lore.kernel.org/bpf/20220328142123.170157-5-maciej.fijalkowski@intel.com/T/#u

Maciej

> ---
>  drivers/net/ethernet/intel/ice/ice.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index b0b27bfcd7a2..d4f1874df7d0 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -710,7 +710,7 @@ static inline struct xsk_buff_pool *ice_tx_xsk_pool(struct ice_tx_ring *ring)
>  	struct ice_vsi *vsi = ring->vsi;
>  	u16 qid;
> 
> -	qid = ring->q_index - vsi->num_xdp_txq;
> +	qid = ring->q_index - vsi->alloc_txq;
> 
>  	if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps))
>  		return NULL;
> -- 
> 2.34.1
> 