Date:	Wed, 07 Oct 2015 18:54:49 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	Roger Pau Monné <roger.pau@...rix.com>
CC:	xen-devel@...ts.xen.org, david.vrabel@...rix.com,
	linux-kernel@...r.kernel.org, konrad.wilk@...cle.com,
	felipe.franciosi@...rix.com, axboe@...com, hch@...radead.org,
	avanzini.arianna@...il.com, rafal.mielniczuk@...rix.com,
	boris.ostrovsky@...cle.com, jonathan.davies@...rix.com
Subject: Re: [PATCH v3 9/9] xen/blkback: get number of hardware queues/rings
 from blkfront


On 10/05/2015 11:15 PM, Roger Pau Monné wrote:
> On 05/09/15 at 14:39, Bob Liu wrote:
>> The backend advertises "multi-queue-max-queues" to the frontend, and then
>> reads back the final negotiated number of queues/rings from
>> "multi-queue-num-queues", which is written by blkfront.
>>
>> Signed-off-by: Bob Liu <bob.liu@...cle.com>
>> ---
>>  drivers/block/xen-blkback/blkback.c |    8 ++++++++
>>  drivers/block/xen-blkback/xenbus.c  |   36 ++++++++++++++++++++++++++++++-----
>>  2 files changed, 39 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index fd02240..b904fe05f0 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -83,6 +83,11 @@ module_param_named(max_persistent_grants, xen_blkif_max_pgrants, int, 0644);
>>  MODULE_PARM_DESC(max_persistent_grants,
>>                   "Maximum number of grants to map persistently");
>>  
>> +unsigned int xenblk_max_queues;
>> +module_param_named(max_queues, xenblk_max_queues, uint, 0644);
>> +MODULE_PARM_DESC(max_queues,
>> +		 "Maximum number of hardware queues per virtual disk");
>> +
>>  /*
>>   * Maximum order of pages to be used for the shared ring between front and
>>   * backend, 4KB page granularity is used.
>> @@ -1458,6 +1463,9 @@ static int __init xen_blkif_init(void)
>>  		xen_blkif_max_ring_order = XENBUS_MAX_RING_PAGE_ORDER;
>>  	}
>>  
>> +	/* Allow as many queues as there are CPUs, by default */
>> +	xenblk_max_queues = num_online_cpus();
> 
> Hm, I'm not sure of the best way to set a default value for this.
> Consider for example a scenario where Dom0 is limited to 2 vCPUs, but
> DomU has 8 vCPUs. Are we going to limit the number of queues to two? Is
> that the most appropriate value from a performance PoV?
> 
> I have to admit I don't have a clear idea of a default value for this
> field, and maybe the number of CPUs on the backend is indeed what works
> best, but there needs to be a comment explaining the reasoning behind
> this setting.
> 

It looks like xen-netback also chose the number of online CPUs as the
default value. Anyway, that's not a big problem and can easily be changed
in the future.
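
For what it's worth, a minimal sketch of how the default could honor the
module parameter while still falling back to the online CPU count
(illustrative only, not the exact code in this patch):

	/* If max_queues was not set on the module command line (left at
	 * its zero default), use one queue per online CPU in the backend
	 * domain; otherwise respect the administrator's value. */
	if (xenblk_max_queues == 0)
		xenblk_max_queues = num_online_cpus();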

Thanks again for reviewing this big patch set!

Regards,
-Bob

>>  	rc = xen_blkif_interface_init();
>>  	if (rc)
>>  		goto failed_init;
>> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
>> index 04b8d0d..aa97ea5 100644
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
>> @@ -28,6 +28,8 @@
>>  #define RINGREF_NAME_LEN (20)
>>  
>> +extern unsigned int xenblk_max_queues;
> 
> This should live in blkback/common.h
> 
>> +
>>  struct backend_info {
>>  	struct xenbus_device	*dev;
>>  	struct xen_blkif	*blkif;
>> @@ -191,11 +193,6 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>>  	atomic_set(&blkif->drain, 0);
>>  	INIT_WORK(&blkif->free_work, xen_blkif_deferred_free);
>>  
>> -	blkif->nr_rings = 1;
>> -	if (xen_blkif_alloc_rings(blkif)) {
>> -		kmem_cache_free(xen_blkif_cachep, blkif);
>> -		return ERR_PTR(-ENOMEM);
>> -	}
>>  	return blkif;
>>  }
>>  
>> @@ -618,6 +615,14 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
>>  		goto fail;
>>  	}
>>  
>> +	/* Multi-queue: wrte how many queues backend supported. */
>                         ^ write how many queues are supported by the
> backend.
>> +	err = xenbus_printf(XBT_NIL, dev->nodename,
>> +			    "multi-queue-max-queues", "%u", xenblk_max_queues);
>> +	if (err) {
>> +		pr_debug("Error writing multi-queue-num-queues\n");
>                 ^ pr_warn at least.
>> +		goto fail;
>> +	}
>> +
>>  	/* setup back pointer */
>>  	be->blkif->be = be;
>>  
>> @@ -1008,6 +1013,7 @@ static int connect_ring(struct backend_info *be)
>>  	char *xspath;
>>  	size_t xspathsize;
>>  	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
>> +	unsigned int requested_num_queues = 0;
>>  
>>  	pr_debug("%s %s\n", __func__, dev->otherend);
>>  
>> @@ -1035,6 +1041,26 @@ static int connect_ring(struct backend_info *be)
>>  	be->blkif->vbd.feature_gnt_persistent = pers_grants;
>>  	be->blkif->vbd.overflow_max_grants = 0;
>>  
>> +	/*
>> +	 * Read the number of hardware queus from frontend.
>                                        ^ queues
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend, "multi-queue-num-queues", "%u", &requested_num_queues);
>> +	if (err < 0) {
>> +		requested_num_queues = 1;
>> +	} else {
>> +		if (requested_num_queues > xenblk_max_queues
>> +		    || requested_num_queues == 0) {
>> +			/* buggy or malicious guest */
>> +			xenbus_dev_fatal(dev, err,
>> +					"guest requested %u queues, exceeding the maximum of %u.",
>> +					requested_num_queues, xenblk_max_queues);
>> +			return -1;
>> +		}
>> +	}
>> +	be->blkif->nr_rings = requested_num_queues;
>> +	if (xen_blkif_alloc_rings(be->blkif))
>> +		return -ENOMEM;
>> +
>>  	pr_info("nr_rings:%d protocol %d (%s) %s\n", be->blkif->nr_rings,
>>  		 be->blkif->blk_protocol, protocol,
>>  		 pers_grants ? "persistent grants" : "");
>>
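
To make the negotiation described in the commit message concrete, here is
a minimal sketch of the frontend half, using only the xenstore keys and
xenbus helpers that appear in this patch. The function and variable names
are illustrative, not the actual blkfront code:

	/* Sketch: frontend side of the queue-count negotiation. The
	 * backend advertises its limit in "multi-queue-max-queues"; the
	 * frontend picks a value and writes "multi-queue-num-queues" for
	 * the backend's connect_ring() to read back. */
	static unsigned int negotiate_nr_queues(struct xenbus_device *dev)
	{
		unsigned int backend_max, nr_queues;
		int err;

		/* An absent key means the backend predates multi-queue:
		 * fall back to a single ring. */
		err = xenbus_scanf(XBT_NIL, dev->otherend,
				   "multi-queue-max-queues", "%u", &backend_max);
		if (err < 0)
			backend_max = 1;

		/* Use min(online CPUs, backend limit), never zero. */
		nr_queues = min(num_online_cpus(), backend_max);
		if (!nr_queues)
			nr_queues = 1;

		err = xenbus_printf(XBT_NIL, dev->nodename,
				    "multi-queue-num-queues", "%u", nr_queues);
		if (err)
			nr_queues = 1; /* backend assumes 1 if the key is unset */

		return nr_queues;
	}

The backend's connect_ring() above then validates the written value
against xenblk_max_queues, rejecting 0 or anything larger as coming from
a buggy or malicious guest.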