Open Source and information security mailing list archives
Date: Mon, 30 Jun 2014 11:34:22 +0300
From: Amir Vadai <amirv.mellanox@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Yevgeny Petrilin <yevgenyp@...lanox.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ben Hutchings <ben@...adent.org.uk>,
	amira@...lanox.com, Yuval Atias <yuvala@...lanox.com>
Subject: Re: [PATCH net V1 1/3] net/mlx4_en: Don't use irq_affinity_notifier to track changes in IRQ affinity map

On 6/30/2014 9:41 AM, Eric Dumazet wrote:
> On Sun, 2014-06-29 at 11:54 +0300, Amir Vadai wrote:
>> IRQ affinity notifier can only have a single notifier - cpu_rmap
>> notifier. Can't use it to track changes in IRQ affinity map.
>> Detect IRQ affinity changes by comparing CPU to current IRQ affinity map
>> during NAPI poll thread.
>
> ...
>
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
>> index 8be7483..ac3dead 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
>> @@ -474,15 +474,9 @@ int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget)
>>  	/* If we used up all the quota - we're probably not done yet... */
>>  	if (done < budget) {
>>  		/* Done for now */
>> -		cq->mcq.irq_affinity_change = false;
>>  		napi_complete(napi);
>>  		mlx4_en_arm_cq(priv, cq);
>>  		return done;
>> -	} else if (unlikely(cq->mcq.irq_affinity_change)) {
>> -		cq->mcq.irq_affinity_change = false;
>> -		napi_complete(napi);
>> -		mlx4_en_arm_cq(priv, cq);
>> -		return 0;
>>  	}
>>  	return budget;
>>  }
>
> It seems nothing is done then for the TX side after this patch ?
>
> You might want to drain whole queue instead of limiting to a 'budget',
> otherwise, a cpu might be stuck servicing (soft)irq for the TX
> completion, even if irq affinities say otherwise.
>

TX completions are very quick compared to the skb preparation and
sending, which is not the case for RX completions.
Because of that, it is very easy to reproduce the problem in RX flows,
but we never had any report of that problem in the TX flow.

I prefer not to spend time on the TX, since we plan to send a patch soon
to use the same NAPI for both TX and RX.

Amir
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html