Message-ID: <CF9D1877D81D214CB0CA0669EFAE020C26B83E12@CMEXMB1.ad.emulex.com>
Date:	Wed, 15 Jan 2014 12:46:09 +0000
From:	Sathya Perla <Sathya.Perla@...lex.Com>
To:	Ido Shamai <idos@....mellanox.co.il>,
	Yuval Mintz <yuvalmin@...adcom.com>,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Or Gerlitz <or.gerlitz@...il.com>
CC:	Amir Vadai <amirv@...lanox.com>,
	"David S. Miller" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Eugenia Emantayev <eugenia@...lanox.com>,
	Ido Shamay <idos@...lanox.com>
Subject: RE: [PATCH net-next 2/2] net/mlx4: Revert "mlx4: set maximal number
 of default RSS queues"

> -----Original Message-----
> From: netdev-owner@...r.kernel.org [mailto:netdev-owner@...r.kernel.org] On Behalf
> Of Ido Shamai
> 
> On 1/2/2014 12:27 PM, Yuval Mintz wrote:
> >>>> Going back to your original commit 16917b87a "net-next: Add
> >>>> netif_get_num_default_rss_queues", I am still not clear on the
> >>>> following:
> >>>>
> >>>> 1. why do we want a common default for all MQ devices?
> >>> Although networking benefits from multiple interrupt vectors
> >>> (enabling more rings, better performance, etc.), bounding this
> >>> number only by the number of CPUs is unreasonable, as it strains
> >>> system resources; e.g., consider a 40-CPU server - we might wish
> >>> to have 40 vectors per device, but then connecting several
> >>> devices to the same server might cause other functions to fail
> >>> probe, as they would no longer be able to acquire interrupt
> >>> vectors of their own.
> >>
> >> Modern servers with tens of CPUs typically have thousands of MSI-X
> >> vectors, so you should easily be able to plug four cards into a
> >> 64-core server and consume only 256 of the 1-4K vectors available.
> >> Anyway, let me continue with your approach - how about raising the
> >> default hard limit to 16, or setting it to the number of cores at
> >> the NUMA node where the card is plugged in? (a sketch of both
> >> options appears after the quoted thread below)
> >
> > I think an additional issue was memory consumption -
> > additional interrupts --> additional allocated memory (for RX rings).
> > And I do know the issues were real - we've had complaints about
> > devices failing to load due to lack of resources (not all servers in
> > the world are state of the art).
> >
> > Anyway, I believe 8/16 are simply strict limits without any true
> > meaning; judging which is more important - default `slimness' or
> > default performance - is beyond me.
> > Perhaps the NUMA approach will prove beneficial (and will make some
> > sense).
> 
> After reviewing all that was said, I feel there is no need to impose
> this strict, essentially arbitrary limitation on vendors.
> 
> The commit being reverted forces the driver to use at most 8 rings at
> all times, with no way to change this in flight using ethtool, since
> the limit is enforced on the PCI driver at module init (restarting the
> en driver with a different number of requested rings has no effect).
> So this revert is crucial for performance-oriented applications using
> mlx4_en.
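For reference, the helper being debated above - added by commit
16917b87a - boils down to min(8, num_online_cpus()). A NUMA-aware
alternative along the lines suggested in the quoted thread might look
like the sketch below; the second helper is illustrative only, not
existing kernel code:

#define DEFAULT_MAX_NUM_RSS_QUEUES	(8)

/* net/core/dev.c, as added by commit 16917b87a: cap the default
 * number of RSS queues at 8 or the online CPU count, whichever is
 * smaller.
 */
int netif_get_num_default_rss_queues(void)
{
	return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES, num_online_cpus());
}

/* Sketch only: derive the default from the CPUs local to the
 * device's NUMA node instead, falling back to the global default
 * when the node is unknown.
 */
static int netif_get_num_node_rss_queues(struct device *dev)
{
	int node = dev_to_node(dev);

	if (node == NUMA_NO_NODE)
		return netif_get_num_default_rss_queues();

	return cpumask_weight(cpumask_of_node(node));
}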

The number of RSS/RX rings used by the driver can be increased (up to
the HW-supported value) at runtime via the ethtool set-channels
interface.
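For example (the interface name is illustrative), the ring count can be
queried and changed at runtime with:

	ethtool -l eth0		# show current and maximum channel counts
	ethtool -L eth0 rx 16	# request 16 RX rings

provided the device exposes enough HW rings and the driver implements
the set-channels operation.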
