Message-ID: <1404119512.15139.70.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Mon, 30 Jun 2014 02:11:52 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	amirv@...lanox.com
Cc:	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Yevgeny Petrilin <yevgenyp@...lanox.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ben Hutchings <ben@...adent.org.uk>, amira@...lanox.com,
	Yuval Atias <yuvala@...lanox.com>
Subject: Re: [PATCH net V1 1/3] net/mlx4_en: Don't use irq_affinity_notifier
 to track changes in IRQ affinity map

On Mon, 2014-06-30 at 11:34 +0300, Amir Vadai wrote:

> TX completions are very quick compared to the skb preparation and
> sending, which is not the case for RX completions. Because of that, it
> is very easy to reproduce the problem in RX flows, but we have never
> had any report of that problem in the TX flow.

This is because reporters probably use the same number of RX and TX queues.

With TCP Small Queues, TX completions are not always quick if
thousands of flows are active.

Some people hit a locked CPU when, say, one CPU has to drain 8 TX
queues, because the 7 other CPUs can continuously feed more packets.
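
To make that concrete, the consumer side degenerates into something
like this (only a sketch with made-up names, not the actual mlx4 code):

static void tx_irq_drain(struct my_tx_ring *ring)
{
        /* With 7 CPUs producing and only this CPU consuming,
         * the completion queue may never look empty, so this
         * loop never exits and the CPU appears locked up.
         */
        while (tx_cq_has_completion(ring))
                free_completed_skbs(ring);
}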

> I prefer not to spend time on the TX side, since we plan to send a patch soon
> to use the same NAPI for both TX and RX.

Thanks, this sounds great.
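
For reference, such a combined poll usually ends up shaped roughly
like this (again only a sketch, with made-up struct and helper names,
not the eventual mlx4 patch):

static int combined_napi_poll(struct napi_struct *napi, int budget)
{
        struct my_channel *ch = container_of(napi, struct my_channel, napi);
        int work_done;

        drain_tx_completions(ch);               /* reap TX ring first */
        work_done = process_rx(ch, budget);     /* RX bounded by budget */

        if (work_done < budget) {
                napi_complete(napi);
                enable_channel_irq(ch);         /* re-arm completion IRQ */
        }
        return work_done;
}

Since TX reaping then runs in the same NAPI context as RX, it
automatically runs on whatever CPU the channel IRQ is steered to,
with no separate affinity tracking needed for TX.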

