Message-ID: <BN1PR0301MB07077F394C7EAB9BBAE50D14A00A0@BN1PR0301MB0707.namprd03.prod.outlook.com>
Date:	Tue, 24 Mar 2015 01:49:46 +0000
From:	KY Srinivasan <kys@...rosoft.com>
To:	Venkatesh Srinivas <venkateshs@...gle.com>
CC:	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"Linux Kernel Developers List" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"ohering@...e.com" <ohering@...e.com>,
	"James E.J. Bottomley" <jbottomley@...allels.com>,
	Christoph Hellwig <hch@...radead.org>,
	"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
	"apw@...onical.com" <apw@...onical.com>,
	"vkuznets@...hat.com" <vkuznets@...hat.com>,
	"jasowang@...hat.com" <jasowang@...hat.com>
Subject: RE: [PATCH RESEND 2/7] scsi: storvsc: Size the queue depth based on the ringbuffer size



> -----Original Message-----
> From: Venkatesh Srinivas [mailto:venkateshs@...gle.com]
> Sent: Monday, March 23, 2015 5:23 PM
> To: KY Srinivasan
> Cc: gregkh@...uxfoundation.org; Linux Kernel Developers List;
> devel@...uxdriverproject.org; ohering@...e.com; James E.J. Bottomley;
> Christoph Hellwig; linux-scsi@...r.kernel.org; apw@...onical.com;
> vkuznets@...hat.com; jasowang@...hat.com
> Subject: Re: [PATCH RESEND 2/7] scsi: storvsc: Size the queue depth based
> on the ringbuffer size
> 
> On Mon, Mar 23, 2015 at 2:06 PM, K. Y. Srinivasan <kys@...rosoft.com>
> wrote:
> > Size the queue depth based on the ringbuffer size. Also accommodate for the
> > fact that we could have multiple channels (ringbuffers) per adaptor.
> >
> > Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
> > Reviewed-by: Long Li <longli@...rosoft.com>
> > ---
> >  drivers/scsi/storvsc_drv.c |   27 ++++++++++++++++-----------
> >  1 files changed, 16 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > index 27fe850..5a12897 100644
> > --- a/drivers/scsi/storvsc_drv.c
> > +++ b/drivers/scsi/storvsc_drv.c
> > @@ -309,10 +309,15 @@ enum storvsc_request_type {
> >   */
> >
> >  static int storvsc_ringbuffer_size = (256 * PAGE_SIZE);
> > +static u32 max_outstanding_req_per_channel;
> > +
> > +static int storvsc_vcpus_per_sub_channel = 4;
> >
> >  module_param(storvsc_ringbuffer_size, int, S_IRUGO);
> > MODULE_PARM_DESC(storvsc_ringbuffer_size, "Ring buffer size (bytes)");
> >
> > +module_param(storvsc_vcpus_per_sub_channel, int, S_IRUGO);
> > +MODULE_PARM_DESC(vcpus_per_sub_channel, "Ratio of VCPUs to subchannels");
> >  /*
> >   * Timeout in seconds for all devices managed by this driver.
> >   */
> > @@ -320,7 +325,6 @@ static int storvsc_timeout = 180;
> >
> >  static int msft_blist_flags = BLIST_TRY_VPD_PAGES;
> >
> > -#define STORVSC_MAX_IO_REQUESTS                                200
> >
> >  static void storvsc_on_channel_callback(void *context);
> >
> > @@ -1376,7 +1380,6 @@ static int storvsc_do_io(struct hv_device *device,
> >
> >  static int storvsc_device_configure(struct scsi_device *sdevice)
> >  {
> > -       scsi_change_queue_depth(sdevice, STORVSC_MAX_IO_REQUESTS);
> >
> >         blk_queue_max_segment_size(sdevice->request_queue, PAGE_SIZE);
> >
> > @@ -1646,7 +1649,6 @@ static struct scsi_host_template scsi_driver = {
> >         .eh_timed_out =         storvsc_eh_timed_out,
> >         .slave_configure =      storvsc_device_configure,
> >         .cmd_per_lun =          255,
> > -       .can_queue =            STORVSC_MAX_IO_REQUESTS*STORVSC_MAX_TARGETS,
> >         .this_id =              -1,
> >         /* no use setting to 0 since ll_blk_rw reset it to 1 */
> >         /* currently 32 */
> > @@ -1686,6 +1688,7 @@ static int storvsc_probe(struct hv_device *device,
> >                         const struct hv_vmbus_device_id *dev_id)
> >  {
> >         int ret;
> > +       int num_cpus = num_online_cpus();
> >         struct Scsi_Host *host;
> >         struct hv_host_device *host_dev;
> >         bool dev_is_ide = ((dev_id->driver_data == IDE_GUID) ? true : false);
> > @@ -1694,6 +1697,7 @@ static int storvsc_probe(struct hv_device *device,
> >         int max_luns_per_target;
> >         int max_targets;
> >         int max_channels;
> > +       int max_sub_channels = 0;
> >
> >         /*
> >          * Based on the windows host we are running on,
> > @@ -1719,12 +1723,18 @@ static int storvsc_probe(struct hv_device *device,
> >                 max_luns_per_target = STORVSC_MAX_LUNS_PER_TARGET;
> >                 max_targets = STORVSC_MAX_TARGETS;
> >                 max_channels = STORVSC_MAX_CHANNELS;
> > +               /*
> > +                * On Windows8 and above, we support sub-channels for storage.
> > +                * The number of sub-channels offered is based on the number of
> > +                * VCPUs in the guest.
> > +                */
> > +               max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel);
> >                 break;
> >         }
> >
> > -       if (dev_id->driver_data == SFC_GUID)
> > -               scsi_driver.can_queue = (STORVSC_MAX_IO_REQUESTS *
> > -                                        STORVSC_FC_MAX_TARGETS);
> > +       scsi_driver.can_queue = (max_outstanding_req_per_channel *
> > +                                max_sub_channels + 1);
> > +
> 
> If num_online_cpus() returns 1 - 3, can_queue will be set to 1, I
> think. Is that desired?

can_queue will be set to max_outstanding_req_per_channel in this case.
That is what is expected. We will always have the primary channel;
additionally, if the guest has more than 4 VCPUs, the host will offer
an additional subchannel for every 4 VCPUs in the guest. So for fewer
than 4 VCPUs in the guest, we will only have the primary channel.
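
For reference, a minimal userspace sketch of the arithmetic being
discussed (the value used for max_outstanding_req_per_channel is a
made-up placeholder; in the driver it is derived from
storvsc_ringbuffer_size):

#include <stdio.h>

/*
 * Sketch only: shows how can_queue scales with the VCPU count.
 * The value 768 for max_outstanding_req_per_channel is assumed
 * for illustration, not taken from the driver.
 */
int main(void)
{
        int max_outstanding_req_per_channel = 768;  /* assumed */
        int storvsc_vcpus_per_sub_channel = 4;      /* module default */
        int num_cpus;

        for (num_cpus = 1; num_cpus <= 8; num_cpus++) {
                int max_sub_channels =
                        num_cpus / storvsc_vcpus_per_sub_channel;

                /* Expression as posted in the patch: */
                int can_queue = max_outstanding_req_per_channel *
                                max_sub_channels + 1;

                printf("cpus=%d sub_channels=%d can_queue=%d\n",
                       num_cpus, max_sub_channels, can_queue);
        }
        return 0;
}

With the default ratio, max_sub_channels is 0 for 1-3 VCPUs, which is
the case the question above is about.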


Regards,

K. Y
