Message-ID: <20140911234530.GB2052@gmail.com>
Date:	Fri, 12 Sep 2014 01:45:31 +0200
From:	Arianna Avanzini <avanzini.arianna@...il.com>
To:	David Vrabel <david.vrabel@...rix.com>
Cc:	konrad.wilk@...cle.com, boris.ostrovsky@...cle.com,
	xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
	bob.liu@...cle.com, felipe.franciosi@...rix.com, axboe@...com
Subject: Re: [PATCH RFC 4/4] xen, blkback: add support for multiple block
 rings

On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
> On 22/08/14 12:20, Arianna Avanzini wrote:
> > This commit adds to xen-blkback the support to retrieve the block
> > layer API being used and the number of available hardware queues,
> > in case the block layer is using the multi-queue API. This commit
> > also lets the driver advertise the number of available hardware
> > queues to the frontend via XenStore, therefore allowing for actual
> > multiple I/O rings to be used.
> 
> Does it make sense for the number of queues to depend on the number
> of queues available in the underlying block device?

Thank you for raising that point. The current approach probably is not
the best solution.

Bob Liu suggested having the number of I/O rings depend on the number
of vCPUs in the driver domain. Konrad Wilk suggested computing the
number of I/O rings with the following formula, which preserves the
possibility to explicitly define the number of hardware queues to be
exposed to the frontend:

what_backend_exposes = some_module_parameter ?:
                       min(nr_online_cpus(), nr_hardware_queues());
io_rings = min(nr_online_cpus(), what_backend_exposes);

(Please do correct me if I misunderstood your point)
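
To make that concrete, here is a minimal kernel-style C sketch of the
computation; the module parameter name and the nr_hw_queues argument
are placeholders of this sketch, not actual patch code:

#include <linux/cpumask.h>	/* num_online_cpus() */
#include <linux/kernel.h>	/* min() */
#include <linux/moduleparam.h>

/* Placeholder module parameter; 0 means "no explicit override". */
static unsigned int max_queues;
module_param(max_queues, uint, 0644);

static unsigned int blkback_nr_rings(unsigned int nr_hw_queues)
{
	/* GCC's "a ?: b" evaluates to a when it is non-zero, else to b,
	 * so an explicit module parameter overrides the computed value. */
	unsigned int exposed = max_queues ?:
			min(num_online_cpus(), nr_hw_queues);

	/* Never use more rings than there are online vCPUs. */
	return min(num_online_cpus(), exposed);
}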

> What
> behaviour do we want when a domain is migrated to a host with different
> storage?
> 

This first patchset does not include support for migrating a
multi-queue-capable domU to a host with different storage. The second
version, which I am posting now, adds it. The behavior implemented so
far lets the frontend keep the same number of rings if the backend is
still multi-queue-capable after the migration; otherwise it falls back
to a single ring.
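
Sketched roughly, the renegotiation on resume looks like the following;
the XenStore key name "multi-queue-max-queues" and the helper are
assumptions of this sketch, not necessarily what the v2 patches use:

#include <linux/kernel.h>	/* min() */
#include <xen/xenbus.h>

static unsigned int negotiate_nr_rings(struct xenbus_device *dev,
				       unsigned int old_nr_rings)
{
	unsigned int backend_max;

	/* Ask the new backend how many rings it supports; a missing
	 * key means the backend predates multi-queue support. */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &backend_max) != 1)
		return 1;	/* fall back to a single ring */

	/* Keep the old ring count when the new host still supports it. */
	return min(old_nr_rings, backend_max);
}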

> Can you split this patch up as well?

Sure, thank you for the comments.

> 
> David
