Message-ID: <4A022666.4060906@netxen.com>
Date:	Wed, 6 May 2009 17:08:06 -0700
From:	Dhananjay Phadke <dhananjay@...xen.com>
To:	David Miller <davem@...emloft.net>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [patch next 4/4] netxen: disable rss for GbE ports


David Miller wrote:
> Thanks for ignoring my email.
> 
> I'll say it again, maybe you'll listen this time.
> 
> If a user has very cpu intensive netfilter or routing
> rules installed, the RSS flow separation to different
> CPUs can help even at 1Gb speeds.
> 
> Therefore, your change will introduce performance regressions.

I got your point, but there is another reason I have put
forward. With four (not even ten) 4-port netxen NICs
installed, they will consume 64 or more MSI-X vectors,
and all of that for only 1 Gbps per port, as limited by
the physical media speed.

And that is only per-port accounting, when it really has to
be per PCI function: the virtual NICs have more than one
PCI function per physical port (=> 32 vectors per card).
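
To spell out the arithmetic (assuming 4 MSI-X vectors per PCI
function, and two functions per port in the virtual NIC case,
which is what the 32-per-card figure implies):

	4 NICs x 4 ports x 1 func/port  x 4 vectors/func = 64 vectors
	1 card x 4 ports x 2 funcs/port x 4 vectors/func = 32 vectors/card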

There are other ways to balance the load, like moving tx ring
cleanup to a separate (single) MSI-X vector.
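
A rough sketch of what that could look like with the generic
kernel MSI-X API (this is not netxen code; foo_adapter,
foo_tx_intr and foo_rx_intr are made-up names, and error
handling is abbreviated):

#include <linux/interrupt.h>
#include <linux/pci.h>

struct foo_adapter {
	struct pci_dev *pdev;
	struct msix_entry entries[2];	/* [0] = tx cleanup, [1] = rx */
};

static irqreturn_t foo_tx_intr(int irq, void *data)
{
	/* tx ring cleanup gets its own vector */
	return IRQ_HANDLED;
}

static irqreturn_t foo_rx_intr(int irq, void *data)
{
	/* rx processing stays on the other vector */
	return IRQ_HANDLED;
}

static int foo_setup_irqs(struct foo_adapter *adapter)
{
	int err;

	adapter->entries[0].entry = 0;
	adapter->entries[1].entry = 1;

	/* 2 vectors per function instead of 4; a positive return
	 * means fewer vectors were available */
	err = pci_enable_msix(adapter->pdev, adapter->entries, 2);
	if (err)
		return err;

	err = request_irq(adapter->entries[0].vector, foo_tx_intr, 0,
			  "foo-tx", adapter);
	if (err)
		goto err_disable;

	err = request_irq(adapter->entries[1].vector, foo_rx_intr, 0,
			  "foo-rx", adapter);
	if (err)
		goto err_free_tx;

	return 0;

err_free_tx:
	free_irq(adapter->entries[0].vector, adapter);
err_disable:
	pci_disable_msix(adapter->pdev);
	return err;
}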

I do see a reason for conserving MSI-X vectors unless they are
bringing a performance gain.

A middle ground could be using 2 vectors per PCI function instead
of 4 (which has been tested to benefit 10 Gbps NICs).
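
In code that middle ground is just a different vector count at
setup time, e.g. (again a sketch, reusing the hypothetical adapter
above; is_gbe is an invented flag and entries[] would be sized for
the larger count):

	int nvec = adapter->is_gbe ? 2 : 4;	/* 2 for GbE, 4 for 10 Gbps */

	err = pci_enable_msix(adapter->pdev, adapter->entries, nvec);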

Call it my opinion, but maybe it's necessary if the system has a
limited number of MSI vectors.

-Dhananjay
