Date:	Thu, 2 Jan 2014 10:27:02 +0000
From:	Yuval Mintz <yuvalmin@...adcom.com>
To:	Or Gerlitz <ogerlitz@...lanox.com>,
	Or Gerlitz <or.gerlitz@...il.com>
CC:	Amir Vadai <amirv@...lanox.com>,
	"David S. Miller" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Eugenia Emantayev <eugenia@...lanox.com>,
	Ido Shamay <idos@...lanox.com>
Subject: RE: [PATCH net-next 2/2] net/mlx4: Revert "mlx4: set maximal number
 of default RSS queues"

> >> Going back to your original commit 16917b87a "net-next: Add
> >> netif_get_num_default_rss_queues", I am still not clear on this:
> >>
> >> 1. Why do we want a common default for all MQ devices?
> > Although networking benefits from multiple interrupt vectors
> > (enabling more rings, better performance, etc.), bounding this
> > number only by the number of CPUs is unreasonable, as it strains
> > system resources; e.g., consider a 40-CPU server - we might wish
> > to have 40 vectors per device, but that means that connecting
> > several devices to the same server might cause other functions
> > to fail probe, as they will no longer be able to acquire interrupt
> > vectors of their own.
> 
> Modern servers with tens of CPUs typically have thousands of MSI-X
> vectors, which means you should easily be able to plug four cards into a
> server with 64 cores; those would consume 256 out of the 1-4K vectors out
> there. Anyway, let me continue with your approach - how about raising the
> default hard limit to 16, or making it the number of cores on the NUMA
> node where the card is plugged in?
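
For reference, the helper added in that commit is roughly the following
(quoting from memory, so treat it as a sketch; DEFAULT_MAX_NUM_RSS_QUEUES
is the flat cap of 8 we are debating):

	int netif_get_num_default_rss_queues(void)
	{
		/* Default number of RSS queues: the flat cap or the
		 * number of online CPUs, whichever is smaller.
		 */
		return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES,
			     num_online_cpus());
	}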

I think an additional issue was memory consumption -
additional interrupts --> additional allocated memory for Rx rings
(each extra ring means another descriptor ring plus its posted buffers;
with, say, 1K entries of 2KB buffers, that's roughly 2MB per ring).
And I do know the issues were real - we've had complaints about devices
failing to load due to lack of resources (not all servers in the world are
state of the art).

Anyway, I believe 8/16 are simply arbitrary limits without any real meaning;
judging what's more important - default `slimness' or default performance -
is beyond me.
Perhaps the NUMA approach will prove beneficial (and will make some sense).
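
If we go down that path, I'd imagine something like the untested sketch
below (netif_get_num_default_rss_queues_node() is a made-up name; drivers
would pass their struct device so we can look up the local node):

	/* Hypothetical NUMA-aware default: bound the number of default
	 * RSS queues by the CPUs local to the device's node, falling
	 * back to the existing cap when no node affinity is known.
	 */
	static int netif_get_num_default_rss_queues_node(struct device *dev)
	{
		int node = dev_to_node(dev);

		if (node == NUMA_NO_NODE)
			return netif_get_num_default_rss_queues();

		return min_t(int, cpumask_weight(cpumask_of_node(node)),
			     num_online_cpus());
	}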

Thanks,
Yuval