Date:	Mon, 19 Oct 2015 11:42:39 +0200
From:	Roger Pau Monné <roger.pau@...rix.com>
To:	Bob Liu <bob.liu@...cle.com>
CC:	<xen-devel@...ts.xen.org>, <david.vrabel@...rix.com>,
	<linux-kernel@...r.kernel.org>, <konrad.wilk@...cle.com>,
	<felipe.franciosi@...rix.com>, <axboe@...com>, <hch@...radead.org>,
	<avanzini.arianna@...il.com>, <rafal.mielniczuk@...rix.com>,
	<boris.ostrovsky@...cle.com>, <jonathan.davies@...rix.com>
Subject: Re: [PATCH v3 3/9] xen/blkfront: separate per ring information out of
 device info

On 10/10/15 at 10:30, Bob Liu wrote:
> 
> On 10/03/2015 01:02 AM, Roger Pau Monné wrote:
>> On 05/09/15 at 14:39, Bob Liu wrote:
>>> Split per-ring information out into a new structure, blkfront_ring_info; also rename
>>> per blkfront_info to blkfront_dev_info.
>>   ^ removed.
>>>
>>> A ring is the representation of a hardware queue; every vbd device can be
>>> associated with one or more blkfront_ring_info structures, depending on how
>>> many hardware queues/rings are to be used.
>>>
>>> This patch is a preparation for supporting real multi hardware queues/rings.
>>>
>>> Signed-off-by: Arianna Avanzini <avanzini.arianna@...il.com>
>>> Signed-off-by: Bob Liu <bob.liu@...cle.com>
>>> ---
>>>  drivers/block/xen-blkfront.c |  854 ++++++++++++++++++++++--------------------
>>>  1 file changed, 445 insertions(+), 409 deletions(-)
>>>
>>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>>> index 5dd591d..bf416d5 100644
>>> --- a/drivers/block/xen-blkfront.c
>>> +++ b/drivers/block/xen-blkfront.c
>>> @@ -107,7 +107,7 @@ static unsigned int xen_blkif_max_ring_order;
>>>  module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, S_IRUGO);
>>>  MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
>>>  
>>> -#define BLK_RING_SIZE(info) __CONST_RING_SIZE(blkif, PAGE_SIZE * (info)->nr_ring_pages)
>>> +#define BLK_RING_SIZE(dinfo) __CONST_RING_SIZE(blkif, PAGE_SIZE * (dinfo)->nr_ring_pages)
>>
>> This change looks pointless; is there any reason to use dinfo instead of info?
>>
>>>  #define BLK_MAX_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE * XENBUS_MAX_RING_PAGES)
>>>  /*
>>>   * ring-ref%i i=(-1UL) would take 11 characters + 'ring-ref' is 8, so 19
>>> @@ -116,12 +116,31 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
>>>  #define RINGREF_NAME_LEN (20)
>>>  
>>>  /*
>>> + *  Per-ring info.
>>> + *  Every blkfront device can be associated with one or more blkfront_ring_info
>>> + *  structures, depending on how many hardware queues are to be used.
>>> + */
>>> +struct blkfront_ring_info
>>> +{
>>> +	struct blkif_front_ring ring;
>>> +	unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
>>> +	unsigned int evtchn, irq;
>>> +	struct work_struct work;
>>> +	struct gnttab_free_callback callback;
>>> +	struct blk_shadow shadow[BLK_MAX_RING_SIZE];
>>> +	struct list_head grants;
>>> +	struct list_head indirect_pages;
>>> +	unsigned int persistent_gnts_c;
>>
>> persistent grants should be per-device, not per-queue IMHO. Is it really
>> hard to make this global instead of per-queue?
>>
> 
> I didn't see the benefit of making it per-device, only disadvantages: if
> persistent grants are per-device, then we have to introduce an extra lock to
> protect this list, which will complicate the code and may slow down
> performance when the number of queues is large, e.g. 16 queues.

IMHO, and as I said in the reply to patch 7, there's no way to know that
unless you actually implement it, and I think it would have been easier to
just add locks around the existing functions without moving the data
structures (leaving them per-device).

Also, you didn't want to enable multiple queues by default because of the
RAM usage. If we make all this per-device, RAM usage is not going to
increase much, which means we could enable multiple queues by default with
a sensible value (4 maybe?). TBH, I don't think we are going to see
contention with 4 queues per device.
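
To make that concrete, the default could be exposed the same way as the
max_ring_page_order parameter quoted in the patch; a minimal sketch, assuming
a hypothetical parameter name of max_queues:

#include <linux/module.h>
#include <linux/stat.h>

/* Hypothetical default of 4 hardware queues per virtual disk, per the
 * suggestion above; the backend would still negotiate the actual count. */
static unsigned int xen_blkif_max_queues = 4;
module_param_named(max_queues, xen_blkif_max_queues, uint, S_IRUGO);
MODULE_PARM_DESC(max_queues, "Maximum number of hardware queues per virtual disk");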

Roger.
