Date:	Fri, 19 Oct 2012 10:26:32 +0200
From:	Roger Pau Monné <roger.pau@...rix.com>
To:	James Harper <james.harper@...digoit.com.au>
CC:	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	Oliver Chick <oliver.chick@...rix.com>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH RFC] Persistent grant maps for xen blk drivers

On 19/10/12 03:34, James Harper wrote:
>>
>> This patch implements persistent grants for the xen-blk{front,back}
>> mechanism. The effect of this change is to reduce the number of unmap
>> operations performed, since they cause a (costly) TLB shootdown. This allows
>> the I/O performance to scale better when a large number of VMs are
>> performing I/O.
>>
>> Previously, the blkfront driver was supplied a bvec[] from the request
>> queue. This was granted to dom0; dom0 performed the I/O and wrote
>> directly into the grant-mapped memory and unmapped it; blkfront then
>> removed foreign access for that grant. The cost of unmapping scales badly
>> with the number of CPUs in Dom0. An experiment showed that when
>> Dom0 has 24 VCPUs, and guests are performing parallel I/O to a ramdisk, the
>> IPIs from performing unmaps are a bottleneck at 5 guests (at which point
>> 650,000 IOPS are being performed in total). If more than 5 guests are used,
>> the performance declines. By 10 guests, only
>> 400,000 IOPS are being performed.
>>
>> This patch improves performance by only unmapping when the connection
>> between blkfront and back is broken.
> 
> I assume network drivers would suffer from the same affliction... Would a more general persistent map solution be worth considering (or be possible)? That is, a common interface to this persistent mapping, allowing the persistent pool to be shared between all drivers in the DomU?

Yes, there are plans to implement the same mechanism for network drivers.
I would generally avoid having a shared pool of grants for all the devices
of a DomU, as explained in the description of the patch:

Blkback stores a mapping of grefs => {page mapped to by gref} in a
red-black tree. Since the grefs are not known a priori and come with no
guarantee on their ordering, we have to search this tree to find the
page for every gref we receive. This operation takes O(log n) time in
the worst case.
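
To illustrate why the lookup is O(log n): the tree is keyed on the gref
and walked with the standard Linux rbtree pattern. A minimal sketch of
get_persistent_gnt (fields trimmed to the essentials, so not a verbatim
copy of the patch):

#include <linux/kernel.h>	/* container_of */
#include <linux/rbtree.h>
#include <xen/grant_table.h>	/* grant_ref_t */

struct persistent_gnt {
	struct page *page;	/* page mapped to by this gref */
	grant_ref_t gnt;	/* grant reference, used as the tree key */
	struct rb_node node;	/* linkage into the per-device tree */
};

/* O(log n) lookup of a gref in the per-device red-black tree. */
static struct persistent_gnt *get_persistent_gnt(struct rb_root *root,
						 grant_ref_t gref)
{
	struct persistent_gnt *data;
	struct rb_node *node = root->rb_node;

	while (node) {
		data = container_of(node, struct persistent_gnt, node);
		if (gref < data->gnt)
			node = node->rb_left;
		else if (gref > data->gnt)
			node = node->rb_right;
		else
			return data;
	}
	return NULL;
}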

Having a shared pool with all grants would mean that n becomes much
larger, so the search time for a grant would increase. A shared pool
would also need some kind of concurrency control, which would make it
slower still.

What should be shared are the structures used to store the grants
(struct grant and struct persistent_gnt for the front and back drivers
respectively) and the red-black tree helpers (foreach_grant,
add_persistent_gnt and get_persistent_gnt). These should be moved to a
common header so the net implementation can reuse them.
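
As a rough sketch of what such a common header could look like (the
header name and the exact prototypes below are placeholders, not what a
final patch would necessarily use):

/* include/xen/persistent_gnt.h -- hypothetical location and name */
#ifndef __XEN_PERSISTENT_GNT_H__
#define __XEN_PERSISTENT_GNT_H__

#include <linux/rbtree.h>
#include <xen/grant_table.h>	/* grant_ref_t */

struct persistent_gnt {
	struct page *page;	/* page backing this grant */
	grant_ref_t gnt;	/* grant reference, the tree key */
	struct rb_node node;	/* linkage into the per-device rb-tree */
};

/* Insert a grant into the tree; fails if the gref is already present. */
int add_persistent_gnt(struct rb_root *root, struct persistent_gnt *gnt);

/* O(log n) lookup of a gref; returns NULL if not found. */
struct persistent_gnt *get_persistent_gnt(struct rb_root *root,
					  grant_ref_t gref);

/*
 * foreach_grant() (the full-tree walk used when tearing down the
 * connection) would also live here.
 */

#endif /* __XEN_PERSISTENT_GNT_H__ */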
