Date: Mon, 22 Sep 2014 09:40:30 -0700
From: Raghuram Kothakota <Raghuram.Kothakota@...cle.com>
To: David L Stevens <david.stevens@...cle.com>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCHv6 net-next 1/3] sunvnet: upgrade to VIO protocol version 1.6

On Sep 21, 2014, at 9:40 PM, David L Stevens <david.stevens@...cle.com> wrote:
>
>> On 09/18/2014 02:49 PM, Raghuram Kothakota wrote:
>>
>>> In the virtualization world, we want resources to be used efficiently, and memory is
>>> still a very important resource. My concern is mostly that this memory usage of
>>> 32+MB is on a per-LDC basis. LDoms today supports a max of 128 domains, but
>>> in my experience actual deployments are on the order of 50 domains. This is
>>> going up as the platforms get more and more powerful. If there really are
>>> that many peers, then the amount of memory consumed by one vnet instance
>>> is 50 * 32+MB = 1.6GB+. That's fine if this memory is really used, but it seems like it
>>> will be useful only when the peer is another Linux guest with this version of vnet and
>>> the MTU is also configured to use 64K. The memory is wasted for all other
>>> peers that either don't support 64K MTU or are not configured to use it, and also
>>> for the switch port, which obviously doesn't support 64K MTU today.
>
> I think I have a solution for this -- I'm doing some experimenting, but it may be a few days.
>
> However, fundamentally, the problem is that there are n^2-n links both ways, so 50 LDOMs on the same vswitch
> will always be 2450X the resources of a single pair, and lead to scary aggregate numbers. Large installations
> really need more vswitches with fewer LDOMs per switch, at least with the current code.

My example is certainly an extreme case; we introduced an option to disable these inter-vnet links mainly because of this explosion of LDC usage.
We advise customers to disable the inter-vnet links when they see the need to create a large number of vnets in a given vswitch; typically this is the case with management networks. We are also looking at automatically disabling these links when we detect more vnets (probably >16) in a given vswitch.

-Raghuram

>
> +-DLS