Date:   Fri, 12 Feb 2021 16:35:16 +0000
From:   Michael Kelley <mikelley@...rosoft.com>
To:     melanieplageman <melanieplageman@...il.com>,
        "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>
CC:     "andres@...razel.de" <andres@...razel.de>,
        KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "wei.liu@...nel.org" <wei.liu@...nel.org>,
        "jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
        "martin.petersen@...cle.com" <martin.petersen@...cle.com>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        melanieplageman <melanieplageman@...il.com>
Subject: RE: [PATCH v3] scsi: storvsc: Parameterize number hardware queues

From: Melanie Plageman <melanieplageman@...il.com> Sent: Thursday, February 11, 2021 3:18 PM
> 
> Add ability to set the number of hardware queues with new module parameter,
> storvsc_max_hw_queues. The default value remains the number of CPUs.  This
> functionality is useful in some environments (e.g. Microsoft Azure) where
> decreasing the number of hardware queues has been shown to improve
> performance.
> 
> Signed-off-by: Melanie Plageman (Microsoft) <melanieplageman@...il.com>
> ---
>  drivers/scsi/storvsc_drv.c | 28 ++++++++++++++++++++++++++--
>  1 file changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> index 2e4fa77445fd..a64e6664c915 100644
> --- a/drivers/scsi/storvsc_drv.c
> +++ b/drivers/scsi/storvsc_drv.c
> @@ -378,10 +378,14 @@ static u32 max_outstanding_req_per_channel;
>  static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
> 
>  static int storvsc_vcpus_per_sub_channel = 4;
> +static int storvsc_max_hw_queues = -1;
> 
>  module_param(storvsc_ringbuffer_size, int, S_IRUGO);
>  MODULE_PARM_DESC(storvsc_ringbuffer_size, "Ring buffer size (bytes)");
> 
> +module_param(storvsc_max_hw_queues, int, S_IRUGO|S_IWUSR);
> +MODULE_PARM_DESC(storvsc_max_hw_queues, "Maximum number of hardware queues");
> +

There's been an effort underway to stop using the symbolic permission
macros in module_param() and to just use the octal digits (like 0600 for
root-only access).  But I couldn't immediately find documentation on why
this change is being made.  And clearly it hasn't been applied to the
existing module_param() uses here in storvsc_drv.c.  But with this being
a new parameter, let's use the recommended octal digit format.

>  module_param(storvsc_vcpus_per_sub_channel, int, S_IRUGO);
>  MODULE_PARM_DESC(storvsc_vcpus_per_sub_channel, "Ratio of VCPUs to subchannels");
> 
> @@ -1897,6 +1901,7 @@ static int storvsc_probe(struct hv_device *device,
>  {
>  	int ret;
>  	int num_cpus = num_online_cpus();
> +	int num_present_cpus = num_present_cpus();
>  	struct Scsi_Host *host;
>  	struct hv_host_device *host_dev;
>  	bool dev_is_ide = ((dev_id->driver_data == IDE_GUID) ? true : false);
> @@ -2004,8 +2009,19 @@ static int storvsc_probe(struct hv_device *device,
>  	 * For non-IDE disks, the host supports multiple channels.
>  	 * Set the number of HW queues we are supporting.
>  	 */
> -	if (!dev_is_ide)
> -		host->nr_hw_queues = num_present_cpus();
> +	if (!dev_is_ide) {
> +		if (storvsc_max_hw_queues == -1)
> +			host->nr_hw_queues = num_present_cpus;
> +		else if (storvsc_max_hw_queues > num_present_cpus ||
> +			 storvsc_max_hw_queues == 0 ||
> +			storvsc_max_hw_queues < -1) {
> +			storvsc_log(device, STORVSC_LOGGING_WARN,
> +				"Resetting invalid storvsc_max_hw_queues value to default.\n");
> +			host->nr_hw_queues = num_present_cpus;
> +			storvsc_max_hw_queues = -1;
> +		} else
> +			host->nr_hw_queues = storvsc_max_hw_queues;
> +	}

I have a couple of thoughts about the above logic.  As the code is written,
valid values are integers from 1 to the number of CPUs, plus -1.  The logic
would be simpler if the module parameter were an unsigned int instead of
a signed int, with zero as the marker for "use the number of CPUs".  Then
you wouldn't have to check for negative values or have special handling
for -1.

Second, I think you can avoid intertwining the logic for checking for an
invalid value with the logic for setting host->nr_hw_queues.  Check for an
invalid value first, then do the setting of host->nr_hw_queues.

Putting both thoughts together, you could get code like this:

	if (!dev_is_ide) {
		if (storvsc_max_hw_queues > num_present_cpus) {
			storvsc_max_hw_queues = 0;
			storvsc_log(device, STORVSC_LOGGING_WARN,
				"Resetting invalid storvsc_max_hw_queues value to default.\n");
		}
		if (storvsc_max_hw_queues)
			host->nr_hw_queues = storvsc_max_hw_queues;
		else
			host->nr_hw_queues = num_present_cpus;
	}

> 
>  	/*
>  	 * Set the error handler work queue.
> @@ -2169,6 +2185,14 @@ static int __init storvsc_drv_init(void)
>  		vmscsi_size_delta,
>  		sizeof(u64)));
> 
> +	if (storvsc_max_hw_queues > num_present_cpus() ||
> +		storvsc_max_hw_queues == 0 ||
> +		storvsc_max_hw_queues < -1) {
> +		pr_warn("Setting storvsc_max_hw_queues to -1. %d is invalid.\n",
> +			storvsc_max_hw_queues);
> +		storvsc_max_hw_queues = -1;
> +	}
> +

Is this check really needed?  Any usage of the value will be in
storvsc_probe(), where the same check is performed.  I'm not seeing
a scenario where this check adds value over what's already being
done in storvsc_probe(), but maybe I'm missing something.

>  #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
>  	fc_transport_template = fc_attach_transport(&fc_transport_functions);
>  	if (!fc_transport_template)
> --
> 2.20.1
