Date:   Thu, 18 Feb 2021 05:24:35 +0000
From:   Michael Kelley <mikelley@...rosoft.com>
To:     melanieplageman <melanieplageman@...il.com>
CC:     "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
        "andres@...razel.de" <andres@...razel.de>,
        KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "wei.liu@...nel.org" <wei.liu@...nel.org>,
        "jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
        "martin.petersen@...cle.com" <martin.petersen@...cle.com>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v3] scsi: storvsc: Parameterize number of hardware queues

From: Melanie Plageman <melanieplageman@...il.com> Sent: Wednesday, February 17, 2021 4:05 PM
> 
> On Fri, Feb 12, 2021 at 04:35:16PM +0000, Michael Kelley wrote:
> > From: Melanie Plageman <melanieplageman@...il.com> Sent: Thursday, February 11,
> 2021 3:18 PM
> > >
> > > Add ability to set the number of hardware queues with new module parameter,
> > > storvsc_max_hw_queues. The default value remains the number of CPUs.  This
> > > functionality is useful in some environments (e.g. Microsoft Azure) where
> > > decreasing the number of hardware queues has been shown to improve
> > > performance.
> > >
> > > Signed-off-by: Melanie Plageman (Microsoft) <melanieplageman@...il.com>
> > > ---
> > >  drivers/scsi/storvsc_drv.c | 28 ++++++++++++++++++++++++++--
> > >  1 file changed, 26 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > > index 2e4fa77445fd..a64e6664c915 100644
> > > --- a/drivers/scsi/storvsc_drv.c
> > > +++ b/drivers/scsi/storvsc_drv.c
> > > @@ -378,10 +378,14 @@ static u32 max_outstanding_req_per_channel;
> > >  static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
> > >
> > >  static int storvsc_vcpus_per_sub_channel = 4;
> > > +static int storvsc_max_hw_queues = -1;
> > >
> > >  module_param(storvsc_ringbuffer_size, int, S_IRUGO);
> > >  MODULE_PARM_DESC(storvsc_ringbuffer_size, "Ring buffer size (bytes)");
> > >
> > > +module_param(storvsc_max_hw_queues, int, S_IRUGO|S_IWUSR);
> > > +MODULE_PARM_DESC(storvsc_max_hw_queues, "Maximum number of hardware queues");
> > > +
> >
> > There's been an effort underway to not use the symbolic permissions in
> > module_param(), but to just use the octal digits (like 0600 for root only
> > access).   But I couldn't immediately find documentation on why this
> > change is being made.  And clearly it hasn't been applied to the
> > existing module_param() uses here in storvsc_drv.c.  But with this being
> > a new parameter, let's use the recommended octal digit format.
> 
> Thanks. I will update this in v4.
> 
> >
> > >  module_param(storvsc_vcpus_per_sub_channel, int, S_IRUGO);
> > >  MODULE_PARM_DESC(storvsc_vcpus_per_sub_channel, "Ratio of VCPUs to subchannels");
> > >
> > > @@ -1897,6 +1901,7 @@ static int storvsc_probe(struct hv_device *device,
> > >  {
> > >  	int ret;
> > >  	int num_cpus = num_online_cpus();
> > > +	int num_present_cpus = num_present_cpus();
> > >  	struct Scsi_Host *host;
> > >  	struct hv_host_device *host_dev;
> > >  	bool dev_is_ide = ((dev_id->driver_data == IDE_GUID) ? true : false);
> > > @@ -2004,8 +2009,19 @@ static int storvsc_probe(struct hv_device *device,
> > >  	 * For non-IDE disks, the host supports multiple channels.
> > >  	 * Set the number of HW queues we are supporting.
> > >  	 */
> > > -	if (!dev_is_ide)
> > > -		host->nr_hw_queues = num_present_cpus();
> > > +	if (!dev_is_ide) {
> > > +		if (storvsc_max_hw_queues == -1)
> > > +			host->nr_hw_queues = num_present_cpus;
> > > +		else if (storvsc_max_hw_queues > num_present_cpus ||
> > > +			 storvsc_max_hw_queues == 0 ||
> > > +			storvsc_max_hw_queues < -1) {
> > > +			storvsc_log(device, STORVSC_LOGGING_WARN,
> > > +				"Resetting invalid storvsc_max_hw_queues value to default.\n");
> > > +			host->nr_hw_queues = num_present_cpus;
> > > +			storvsc_max_hw_queues = -1;
> > > +		} else
> > > +			host->nr_hw_queues = storvsc_max_hw_queues;
> > > +	}
> >
> > I have a couple of thoughts about the above logic.  As the code is written,
> > valid values are integers from 1 to the number of CPUs, and -1.  The logic
> > would be simpler if the module parameter was an unsigned int instead of
> > a signed int, and zero was the marker for "use number of CPUs".  Then
> > you wouldn't have to check for negative values or have special handling
> > for -1.
> 
> I used -1 because the linter ./scripts/checkpatch.pl throws an ERROR "do
> not initialise statics to 0"

OK, right.  The intent of that warning is not that using zero as a value
is bad.  The intent is to indicate that statics are by default initialized
to zero, so explicitly adding the "= 0" is unnecessary.  So feel free to
use "0" as the marker for "use number of CPUs".  Just don't
add the "= 0" in the variable declaration. :-)

> 
> >
> > Second, I think you can avoid intertwining the logic for checking for an
> > invalid value, and actually setting host->nr_hw_queues.  Check for an
> > invalid value first, then do the setting of host->nr_hw_queues.
> >
> > Putting both thoughts together, you could get code like this:
> >
> > 	if (!dev_is_ide) {
> > 		if (storvsc_max_hw_queues > num_present_cpus) {
> > 			storvsc_max_hw_queues = 0;
> > 			storvsc_log(device, STORVSC_LOGGING_WARN,
> > 				"Resetting invalid storvsc_max_hw_queues value to default.\n");
> > 		}
> > 		if (storvsc_max_hw_queues)
> > 			host->nr_hw_queues = storvsc_max_hw_queues;
> > 		else
> > 			host->nr_hw_queues = num_present_cpus;
> > 	}
> 
> I will update the logic like this.
> 
> >
> > >
> > >  	/*
> > >  	 * Set the error handler work queue.
> > > @@ -2169,6 +2185,14 @@ static int __init storvsc_drv_init(void)
> > >  		vmscsi_size_delta,
> > >  		sizeof(u64)));
> > >
> > > +	if (storvsc_max_hw_queues > num_present_cpus() ||
> > > +		storvsc_max_hw_queues == 0 ||
> > > +		storvsc_max_hw_queues < -1) {
> > > +		pr_warn("Setting storvsc_max_hw_queues to -1. %d is invalid.\n",
> > > +			storvsc_max_hw_queues);
> > > +		storvsc_max_hw_queues = -1;
> > > +	}
> > > +
> >
> > Is this check really needed?  Any usage of the value will be in
> > storvsc_probe() where the same check is performed.  I'm not seeing
> > a scenario where this check adds value over what's already being
> > done in storvsc_probe(), but maybe I'm missing it.
> 
> It is not. I had initially added it because I did not plan on making the
> parameter updatable and thought it would be better to log the invalid
> value once instead of once per device (from each probe()). But after
> making it updatable, I had to add invalid-value checking to
> storvsc_probe() anyway, so this is definitely unneeded. If you specify
> the parameter at boot time and this check is here, you would only get
> one instance of the logging message (because it resets the value of
> storvsc_max_hw_queues to the default before probe() is called), but I
> don't think that is worth it.

I agree.  And actually, if the code in storvsc_probe() "fixes" the bad
value, you would not get the warning on subsequent calls to
storvsc_probe().  So again, you would have only one warning unless
someone manually changed it to a bad value again via the /sys/module
interface.

Michael

> 
> >
> > >  #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
> > >  	fc_transport_template = fc_attach_transport(&fc_transport_functions);
> > >  	if (!fc_transport_template)
> > > --
> > > 2.20.1
> >
