Date:	Fri, 12 Sep 2014 11:13:31 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	avanzini.arianna@...il.com
CC:	David Vrabel <david.vrabel@...rix.com>, konrad.wilk@...cle.com,
	boris.ostrovsky@...cle.com, xen-devel@...ts.xenproject.org,
	linux-kernel@...r.kernel.org, felipe.franciosi@...rix.com,
	axboe@...com
Subject: Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block
 rings


On 09/12/2014 07:45 AM, Arianna Avanzini wrote:
> On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
>> On 22/08/14 12:20, Arianna Avanzini wrote:
>>> This commit adds to xen-blkback the support to retrieve the block
>>> layer API being used and the number of available hardware queues,
>>> in case the block layer is using the multi-queue API. This commit
>>> also lets the driver advertise the number of available hardware
>>> queues to the frontend via XenStore, therefore allowing for actual
>>> multiple I/O rings to be used.
>>
>> Does it make sense for the number of queues to depend on the
>> number of queues available in the underlying block device?
> 
> Thank you for raising that point. It probably is not the best solution.
> 
> Bob Liu suggested having the number of I/O rings depend on the number
> of vCPUs in the driver domain. Konrad Wilk suggested computing the
> number of I/O rings with the following formula, which preserves the
> ability to explicitly define the number of hardware queues to be
> exposed to the frontend:
> what_backend_exposes = some_module_parameter ?:
>                    min(nr_online_cpus(), nr_hardware_queues());
> io_rings = min(nr_online_cpus(), what_backend_exposes);
> 
> (Please do correct me if I misunderstood your point)
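
To make the quoted formula concrete, here is a rough C sketch of what
that computation could look like in the backend. The names
xen_blkif_nr_rings() and the max_queues module parameter are purely
illustrative (not existing code), and hw_queues stands for however the
backend learns the underlying device's queue count (e.g. its blk-mq
hardware queue count, or 1 for a single-queue device):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/cpumask.h>

/* Illustrative: 0 means "no explicit limit", i.e. some_module_parameter unset. */
static unsigned int max_queues;
module_param(max_queues, uint, 0644);

static unsigned int xen_blkif_nr_rings(unsigned int hw_queues)
{
	unsigned int exposed;

	/* An explicit module parameter wins; otherwise cap by CPUs and HW queues. */
	exposed = max_queues ?: min(num_online_cpus(), hw_queues);

	/* Never expose more rings than there are CPUs to serve them. */
	return min(num_online_cpus(), exposed);
}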

Since the xen-netfront/xen-netback drivers have already implemented
multi-queue support, I'd like us to negotiate the number of queues the
same way the net drivers do.
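
For reference, the net drivers do this over XenStore: the backend
advertises "multi-queue-max-queues" and the frontend writes back
"multi-queue-num-queues". A rough sketch of what the blkback side of
such a handshake could look like follows; struct backend_info and the
two helpers are illustrative names, and the key names are simply
borrowed from netback, not something blkback defines today:

#include <linux/kernel.h>
#include <xen/xenbus.h>

/* Hypothetical per-backend state; only the xenbus_device is used here. */
struct backend_info {
	struct xenbus_device *dev;
};

/* Advertise the maximum number of rings this backend is willing to serve. */
static int blkback_advertise_queues(struct backend_info *be,
				    unsigned int max_rings)
{
	return xenbus_printf(XBT_NIL, be->dev->nodename,
			     "multi-queue-max-queues", "%u", max_rings);
}

/* Read how many rings the frontend actually requested (default to 1). */
static unsigned int blkback_requested_queues(struct backend_info *be,
					     unsigned int max_rings)
{
	unsigned int requested = 1;

	if (xenbus_scanf(XBT_NIL, be->dev->otherend,
			 "multi-queue-num-queues", "%u", &requested) != 1)
		requested = 1;

	return clamp_t(unsigned int, requested, 1, max_rings);
}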

Thanks,
-Bob