Date:	Tue, 26 Jul 2016 16:58:10 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	Roger Pau Monné <roger.pau@...rix.com>
CC:	linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
	konrad.wilk@...cle.com
Subject: Re: [PATCH v2 3/3] xen-blkfront: dynamic configuration of per-vbd
 resources


On 07/26/2016 04:44 PM, Roger Pau Monné wrote:
> On Tue, Jul 26, 2016 at 01:19:37PM +0800, Bob Liu wrote:
>> The current VBD layer reserves buffer space for each attached device based on
>> three statically configured settings which are read at boot time.
>>  * max_indirect_segs: Maximum number of segments.
>>  * max_ring_page_order: Maximum order of pages to be used for the shared ring.
>>  * max_queues: Maximum number of queues (rings) to be used.
>>
>> But the storage backend, workload, and guest memory all result in very
>> different tuning requirements. It's impossible to predict application
>> characteristics centrally, so it's best to allow the settings to be
>> adjusted dynamically based on the workload inside the guest.
>>
>> Usage:
>> Show current values:
>> cat /sys/devices/vbd-xxx/max_indirect_segs
>> cat /sys/devices/vbd-xxx/max_ring_page_order
>> cat /sys/devices/vbd-xxx/max_queues
>>
>> Write new values:
>> echo <new value> > /sys/devices/vbd-xxx/max_indirect_segs
>> echo <new value> > /sys/devices/vbd-xxx/max_ring_page_order
>> echo <new value> > /sys/devices/vbd-xxx/max_queues
>>
>> Signed-off-by: Bob Liu <bob.liu@...cle.com>
>> --
>> v2: Rename to max_ring_page_order and remove the waiting code, as suggested by Roger.
>> ---
>>  drivers/block/xen-blkfront.c |  275 +++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 269 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index 1b4c380..ff5ebe5 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -212,6 +212,11 @@ struct blkfront_info
>>  	/* Save uncomplete reqs and bios for migration. */
>>  	struct list_head requests;
>>  	struct bio_list bio_list;
>> +	/* For dynamic configuration. */
>> +	unsigned int reconfiguring:1;
>> +	int new_max_indirect_segments;
> 
> Can't you just use max_indirect_segments? Is it really necessary to
> introduce a new struct member?
> 
>> +	int max_ring_page_order;
>> +	int max_queues;

Do you mean we should also get rid of these two new struct members?
I'll think about that.
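
If we drop the extra field, the sysfs store handler could write straight
into the existing max_indirect_segments. A rough, untested sketch
(max_indirect_segs_store is just an illustrative name; locking against a
concurrent reconnect, and how the reconnect is actually kicked off, are
left out):

static ssize_t max_indirect_segs_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t count)
{
	struct blkfront_info *info = dev_get_drvdata(dev);
	unsigned int segs;
	int err;

	err = kstrtouint(buf, 10, &segs);
	if (err)
		return err;

	/* Reuse max_indirect_segments directly instead of keeping
	 * a separate new_max_indirect_segments copy. */
	info->max_indirect_segments = segs;
	info->reconfiguring = 1;

	return count;
}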

>>  };
>>  
>>  static unsigned int nr_minors;
>> @@ -1350,6 +1355,31 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>>  	for (i = 0; i < info->nr_rings; i++)
>>  		blkif_free_ring(&info->rinfo[i]);
>>  
>> +	/* Remove old xenstore nodes. */
>> +	if (info->nr_ring_pages > 1)
>> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-page-order");
>> +
>> +	if (info->nr_rings == 1) {
>> +		if (info->nr_ring_pages == 1) {
>> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-ref");
>> +		} else {
>> +			for (i = 0; i < info->nr_ring_pages; i++) {
>> +				char ring_ref_name[RINGREF_NAME_LEN];
>> +
>> +				snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
>> +				xenbus_rm(XBT_NIL, info->xbdev->nodename, ring_ref_name);
>> +			}
>> +		}
>> +	} else {
>> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "multi-queue-num-queues");
>> +
>> +		for (i = 0; i < info->nr_rings; i++) {
>> +			char queuename[QUEUE_NAME_LEN];
>> +
>> +			snprintf(queuename, QUEUE_NAME_LEN, "queue-%u", i);
>> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, queuename);
>> +		}
>> +	}
>>  	kfree(info->rinfo);
>>  	info->rinfo = NULL;
>>  	info->nr_rings = 0;
>> @@ -1763,15 +1793,21 @@ static int talk_to_blkback(struct xenbus_device *dev,
>>  	const char *message = NULL;
>>  	struct xenbus_transaction xbt;
>>  	int err;
>> -	unsigned int i, max_page_order = 0;
>> +	unsigned int i, backend_max_order = 0;
>>  	unsigned int ring_page_order = 0;
>>  
>>  	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> -			   "max-ring-page-order", "%u", &max_page_order);
>> +			   "max-ring-page-order", "%u", &backend_max_order);
>>  	if (err != 1)
>>  		info->nr_ring_pages = 1;
>>  	else {
>> -		ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
>> +		if (info->max_ring_page_order) {
>> +			/* Dynamic configured through /sys. */
>> +			BUG_ON(info->max_ring_page_order > backend_max_order);
> 
> Maybe I'm missing something, but I don't think you can BUG here. The
> following flow, for example, will trigger this BUG:
> 

You are right, this BUG can be triggered now that the waiting code has been
removed in this version. Will be updated.

>  - Admin sets max_ring_page_order = 2 from sysfs, frontend reconfigures.
>  - VM is migrated to a new host without multipage ring support.
>  - BUG will trigger on reconnection (because backend_max_order == 1 and 
>    max_ring_page_order == 2).
> 
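
Right. One way to avoid the BUG would be to clamp to whatever the backend
advertises on (re)connection instead, roughly (untested sketch):

	if (info->max_ring_page_order) {
		/* Dynamically configured through /sys; clamp to the
		 * backend's limit instead of BUG()ing after migration. */
		ring_page_order = min_t(unsigned int,
					info->max_ring_page_order,
					backend_max_order);
	} else {
		ring_page_order = min(xen_blkif_max_ring_order,
				      backend_max_order);
	}
	info->nr_ring_pages = 1 << ring_page_order;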
