Message-Id: <20230518170942.418109-2-anthony.l.nguyen@intel.com>
Date: Thu, 18 May 2023 10:09:40 -0700
From: Tony Nguyen <anthony.l.nguyen@...el.com>
To: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
	edumazet@...gle.com, netdev@...r.kernel.org
Cc: Kurt Kanzenbach <kurt@...utronix.de>, anthony.l.nguyen@...el.com,
	maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
	ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
	john.fastabend@...il.com, bpf@...r.kernel.org, sasha.neftin@...el.com,
	Naama Meir <naamax.meir@...ux.intel.com>
Subject: [PATCH net-next 1/3] igc: Avoid transmit queue timeout for XDP

From: Kurt Kanzenbach <kurt@...utronix.de>

High XDP load triggers the netdev watchdog:

|NETDEV WATCHDOG: enp3s0 (igc): transmit queue 2 timed out

The reason is that the Tx queue transmission start (txq->trans_start) is
not updated in the XDP code path. Therefore, update it in all XDP
transmission functions.

Signed-off-by: Kurt Kanzenbach <kurt@...utronix.de>
Tested-by: Naama Meir <naamax.meir@...ux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
---
 drivers/net/ethernet/intel/igc/igc_main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 38d113b48111..c5ef1edcf548 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -2411,6 +2411,8 @@ static int igc_xdp_xmit_back(struct igc_adapter *adapter, struct xdp_buff *xdp)
 	nq = txring_txq(ring);
 
 	__netif_tx_lock(nq, cpu);
+	/* Avoid transmit queue timeout since we share it with the slow path */
+	txq_trans_cond_update(nq);
 	res = igc_xdp_init_tx_descriptor(ring, xdpf);
 	__netif_tx_unlock(nq);
 	return res;
@@ -2829,6 +2831,9 @@ static void igc_xdp_xmit_zc(struct igc_ring *ring)
 
 	__netif_tx_lock(nq, cpu);
 
+	/* Avoid transmit queue timeout since we share it with the slow path */
+	txq_trans_cond_update(nq);
+
 	budget = igc_desc_unused(ring);
 
 	while (xsk_tx_peek_desc(pool, &xdp_desc) && budget--) {
@@ -6354,6 +6359,9 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
 
 	__netif_tx_lock(nq, cpu);
 
+	/* Avoid transmit queue timeout since we share it with the slow path */
+	txq_trans_cond_update(nq);
+
 	drops = 0;
 	for (i = 0; i < num_frames; i++) {
 		int err;
-- 
2.38.1
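
For readers outside the igc driver, the pattern the patch applies in each
XDP transmit path looks roughly like the sketch below. It is illustrative
only: example_xdp_xmit_frame() and example_put_frame_on_ring() are
hypothetical stand-ins for driver code such as igc_xdp_init_tx_descriptor();
only the locking and the txq_trans_cond_update() call mirror the hunks above.

#include <linux/netdevice.h>
#include <linux/smp.h>
#include <net/xdp.h>

/* Hypothetical placeholder for driver-specific descriptor setup. */
static int example_put_frame_on_ring(struct netdev_queue *nq,
				     struct xdp_frame *xdpf)
{
	return 0;
}

static int example_xdp_xmit_frame(struct net_device *dev, u16 queue_index,
				  struct xdp_frame *xdpf)
{
	struct netdev_queue *nq = netdev_get_tx_queue(dev, queue_index);
	int cpu = smp_processor_id();
	int res;

	__netif_tx_lock(nq, cpu);

	/* The XDP path shares this queue with the regular stack path, so
	 * refresh trans_start here; otherwise the netdev watchdog sees a
	 * stale timestamp under sustained XDP-only load and reports a
	 * transmit queue timeout.
	 */
	txq_trans_cond_update(nq);

	res = example_put_frame_on_ring(nq, xdpf);

	__netif_tx_unlock(nq);
	return res;
}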