Message-ID: <6035A0D088A63A46850C3988ED045A4B32C1043A@BITCOM1.int.sbss.com.au>
Date: Fri, 19 Oct 2012 01:34:40 +0000
From: James Harper <james.harper@...digoit.com.au>
To: Roger Pau Monne <roger.pau@...rix.com>,
"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>
CC: Oliver Chick <oliver.chick@...rix.com>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [Xen-devel] [PATCH RFC] Persistent grant maps for xen blk
drivers
>
> This patch implements persistent grants for the xen-blk{front,back}
> mechanism. The effect of this change is to reduce the number of unmap
> operations performed, since they cause a (costly) TLB shootdown. This allows
> the I/O performance to scale better when a large number of VMs are
> performing I/O.
>
> Previously, the blkfront driver was supplied a bvec[] from the request
> queue. This was granted to dom0; dom0 performed the I/O and wrote
> directly into the grant-mapped memory and unmapped it; blkfront then
> removed foreign access for that grant. The cost of unmapping scales badly
> with the number of CPUs in Dom0. An experiment showed that when
> Dom0 has 24 VCPUs and guests are performing parallel I/O to a ramdisk,
> the IPIs from performing unmaps become a bottleneck at 5 guests (at which
> point 650,000 IOPS are being performed in total). Beyond 5 guests,
> performance declines; by 10 guests, only 400,000 IOPS are being performed.
>
> This patch improves performance by only unmapping when the connection
> between blkfront and blkback is broken.
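
(For illustration, a minimal sketch of the persistent-grant idea as described
above; pgrant, pool and pgrant_get are hypothetical names, not the patch's
actual identifiers.)

#include <linux/kernel.h>               /* ARRAY_SIZE */
#include <linux/mm.h>                   /* struct page */
#include <xen/interface/grant_table.h>  /* grant_ref_t */
#include <xen/interface/io/blkif.h>     /* BLKIF_MAX_SEGMENTS_PER_REQUEST */

struct pgrant {
	grant_ref_t ref;    /* granted to dom0 once, at connect time */
	struct page *page;  /* backing page; the grant stays active */
	bool in_use;
};

/* Fixed pool of persistently granted pages, one per ring segment. */
static struct pgrant pool[BLKIF_MAX_SEGMENTS_PER_REQUEST];

/*
 * Per request: borrow an already-granted page and copy the bvec data
 * through it, instead of granting the bvec page and revoking the grant
 * (with its TLB-shootdown IPIs) on completion.
 */
static struct pgrant *pgrant_get(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(pool); i++) {
		if (!pool[i].in_use) {
			pool[i].in_use = true;
			return &pool[i];
		}
	}
	return NULL; /* pool exhausted; wait for in-flight requests */
}

Under this scheme the grants are only revoked on disconnect, so the costly
unmap happens once per connection rather than once per request.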
I assume network drivers would suffer from the same affliction... Would a
more general persistent-map solution be worth considering (or even be
possible)? That is, a common interface to this persistent mapping, allowing
the persistent pool to be shared between all drivers in the DomU? Something
along the lines of the hypothetical sketch below, perhaps.
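
(Purely illustrative; every name below is hypothetical, and it assumes the
caller copies data through pre-granted pages, as the blk patch does.)

/* One persistent-grant pool per domU, shared by blkfront, netfront, etc. */
struct pgrant_pool *pgrant_pool_get(domid_t otherend);

/* Borrow an already-granted page; the caller copies data in/out,
 * so no per-request grant or unmap is required. */
int pgrant_pool_alloc(struct pgrant_pool *pool,
		      grant_ref_t *ref, struct page **page);

/* Return the page to the pool; the grant stays active. */
void pgrant_pool_free(struct pgrant_pool *pool, grant_ref_t ref);

/* Only on disconnect/suspend does the pool revoke its grants. */
void pgrant_pool_put(struct pgrant_pool *pool);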
James