Message-ID: <20240816144847.GA9829@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
Date: Fri, 16 Aug 2024 07:48:47 -0700
From: Erni Sri Satya Vennela <ernis@...ux.microsoft.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
	decui@...rosoft.com, davem@...emloft.net, edumazet@...gle.com,
	pabeni@...hat.com, linux-hyperv@...r.kernel.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	ernis@...rosoft.com
Subject: Re: [PATCH v2] net: netvsc: Update default VMBus channels

On Thu, Aug 15, 2024 at 09:08:56AM -0700, Jakub Kicinski wrote:
> On Wed, 14 Aug 2024 09:59:13 -0700 Erni Sri Satya Vennela wrote:
> > Change VMBus channels macro (VRSS_CHANNEL_DEFAULT) in
> > Linux netvsc from 8 to 16 to align with Azure Windows VM
> > and improve networking throughput.
> > 
> > For VMs with fewer than 16 vCPUs, the channel count depends
> > on the number of vCPUs. Between 16 and 32 vCPUs, the channels
> > default to VRSS_CHANNEL_DEFAULT. For more than 32 vCPUs,
> > set the channels to the number of physical cores / 2 as a way
> > to optimize CPU resource utilization and scale for high-end
> > processors with many cores.
> > The maximum number of channels is capped at 64 by default.
> > 
> > Based on this change the subchannel creation would change as follows:
> > 
> > -------------------------------------------------------------
> > |No. of vCPU	|dev_info->num_chn	|subchannel created |
> > -------------------------------------------------------------
> > |  0-16		|	16		|	vCPU	    |
> > | >16 & <=32	|	16		|	16          |
> > | >32 & <=128	|	vCPU/2		|	vCPU/2      |
> > | >128		|	vCPU/2		|	64          |
> > -------------------------------------------------------------
> > 
> > Performance tests showed significant improvement in throughput:
> > - 0.54% for 16 vCPUs
> > - 0.83% for 32 vCPUs
> > - 1.76% for 48 vCPUs
> > - 10.35% for 64 vCPUs
> > - 13.47% for 96 vCPUs
> 
> Is there anything that needs clarifying in my feedback on v1?
> 
> https://lore.kernel.org/all/20240807201857.445f9f95@kernel.org/
> 
> Ignoring maintainer feedback is known to result in angry outbursts.

I sincerely apologize for the miss on my part. I will make sure this
never happens again. As Haiyang mentioned, we were trying to use
logic similar to netif_get_num_default_rss_queues() while making
sure there are no potential regressions. I will address Michael's
and Haiyang's follow-up comments.
Please let us know your opinion on the same.
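For reference, the channel-selection policy from the table in the patch
description could be sketched roughly as below. This is an illustrative
standalone function, not the actual netvsc code; the helper name
netvsc_default_channels() is hypothetical, and only the two macro values
(VRSS_CHANNEL_DEFAULT raised to 16, max of 64) come from the patch.

```c
#define VRSS_CHANNEL_DEFAULT 16 /* proposed new default (was 8) */
#define VRSS_CHANNEL_MAX     64 /* default upper bound on channels */

/*
 * Hypothetical sketch of the policy in the patch description:
 * returns the number of subchannels actually created for a given
 * vCPU count (i.e. dev_info->num_chn further capped by vCPUs and
 * by VRSS_CHANNEL_MAX).
 */
static unsigned int netvsc_default_channels(unsigned int vcpus)
{
	unsigned int chn;

	if (vcpus <= 32)
		chn = VRSS_CHANNEL_DEFAULT;	/* 0-32 vCPUs: default of 16 */
	else
		chn = vcpus / 2;		/* >32 vCPUs: half the cores */

	/* never create more channels than vCPUs, or than the max */
	if (chn > vcpus)
		chn = vcpus;
	if (chn > VRSS_CHANNEL_MAX)
		chn = VRSS_CHANNEL_MAX;

	return chn;
}
```

Under this sketch an 8-vCPU VM gets 8 subchannels, a 24-vCPU VM gets 16,
a 48-vCPU VM gets 24, and anything above 128 vCPUs is capped at 64,
matching the rows of the table above.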
