Date:	Wed, 27 May 2015 18:20:31 +0000
From:	KY Srinivasan <kys@...rosoft.com>
To:	David Miller <davem@...emloft.net>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"olaf@...fle.de" <olaf@...fle.de>,
	"apw@...onical.com" <apw@...onical.com>,
	"jasowang@...hat.com" <jasowang@...hat.com>
Subject: RE: [PATCH net-next 1/1] hv_netvsc: Properly size the vrss queues



> -----Original Message-----
> From: David Miller [mailto:davem@...emloft.net]
> Sent: Wednesday, May 27, 2015 11:13 AM
> To: KY Srinivasan
> Cc: netdev@...r.kernel.org; linux-kernel@...r.kernel.org;
> devel@...uxdriverproject.org; olaf@...fle.de; apw@...onical.com;
> jasowang@...hat.com
> Subject: Re: [PATCH net-next 1/1] hv_netvsc: Properly size the vrss queues
> 
> From: "K. Y. Srinivasan" <kys@...rosoft.com>
> Date: Tue, 26 May 2015 16:21:09 -0700
> 
> > The current algorithm for deciding on the number of VRSS channels is
> > not optimal, since we open up the minimum of the number of CPUs online
> > and the number of VRSS channels the host is offering. So on a 32 VCPU guest
> > we could potentially open 32 VRSS subchannels. Experimentation has
> > shown that it is best to limit the number of VRSS channels to the number
> > of CPUs within a NUMA node. As part of this work introduce a module
> > parameter to control the number of sub-channels we would open up as well.
> > Here is the new algorithm for deciding on the number of sub-channels we
> > would open up:
> >         1) Pick the minimum of what the host is offering and what the driver
> >            in the guest is specifying via the module parameter.
> >         2) Pick the minimum of (1) and the number of CPUs in the NUMA
> >            node the primary channel is bound to.
> >
> > Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
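
For reference, the selection logic described in the quoted commit message boils
down to taking the minimum of three quantities. Below is a minimal sketch in
kernel-style C; the function and parameter names (netvsc_pick_channels,
offered, ring_cnt, primary_cpu) are illustrative placeholders, not the
identifiers used in the actual patch.

#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/topology.h>

/* Illustrative only: compute how many vRSS sub-channels to open. */
static unsigned int netvsc_pick_channels(unsigned int offered,
					 unsigned int ring_cnt,
					 int primary_cpu)
{
	int node = cpu_to_node(primary_cpu);
	unsigned int num;

	/* 1) Minimum of what the host offers and what the guest requests. */
	num = min(offered, ring_cnt);

	/* 2) Cap at the number of CPUs in the primary channel's NUMA node. */
	num = min_t(unsigned int, num,
		    cpumask_weight(cpumask_of_node(node)));

	return num;
}
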
> 
> No new module parameters, sorry.
> 
> You will have to make it such that this can be changed at run time,
> and use a generic run-time mechanism to configure this value that any
> driver can use.
> 
> I will not accept: "this is not possible" or "this is too hard" as a
> reason why you have to use a module parameter.
> 
> Settings that cannot be set at run time are painful for people who run
> large scale operations where resetting entire systems to change a
> setting is completely and utterly impractical.

Agreed; we are working on full ethtool support to address this very issue. The
module parameter that I introduced here was just a temporary solution until the
full ethtool support is in place. I will get rid of the module parameter and
resubmit this patch.
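
The ethtool path here would be the standard channels interface: userspace
adjusts the count at run time with something like "ethtool -L eth0 combined 8",
which lands in the driver's .set_channels callback. A hedged skeleton of the
driver side follows; the sketch_* names and the hard-coded limits are
placeholders, not the eventual hv_netvsc implementation.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static void sketch_get_channels(struct net_device *ndev,
				struct ethtool_channels *ch)
{
	/* Report the host-offered maximum and the count currently in use. */
	ch->max_combined = 8;		/* placeholder: host offer */
	ch->combined_count = 4;		/* placeholder: channels open now */
}

static int sketch_set_channels(struct net_device *ndev,
			       struct ethtool_channels *ch)
{
	if (ch->combined_count == 0 || ch->combined_count > 8)
		return -EINVAL;

	/* Tear down and re-create the vRSS sub-channels with the new count. */
	return 0;
}

static const struct ethtool_ops sketch_ethtool_ops = {
	.get_channels	= sketch_get_channels,
	.set_channels	= sketch_set_channels,
};
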

Regards,

K. Y 