Message-ID: <88CC54BC-E6CB-4938-8810-3D371FE07493@neclab.eu>
Date: Wed, 13 May 2015 13:01:51 +0000
From: Joao Martins <Joao.Martins@...lab.eu>
To: David Vrabel <david.vrabel@...rix.com>
CC: "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"wei.liu2@...rix.com" <wei.liu2@...rix.com>,
"ian.campbell@...rix.com" <ian.campbell@...rix.com>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>
Subject: Re: [Xen-devel] [RFC PATCH 00/13] Persistent grant maps for xen net
drivers
On 13 May 2015, at 12:50, David Vrabel <david.vrabel@...rix.com> wrote:
> On 12/05/15 18:18, Joao Martins wrote:
>>
>> Packet I/O Tests:
>>
>> Measured on an Intel Xeon E5-1650 v2, Xen 4.5, no HT. Used pktgen "burst 1"
>> and "clone_skbs 100000" (to avoid alloc skb overheads) with various pkt
>> sizes. All tests are DomU <-> Dom0, unless specified otherwise.
>
> Are all these measurements with a single domU with a single VIF?
>
> The biggest problem with a persistent grant method is the amount of
> grant table and maptrack resources it requires. How well does this
> scale to 1000s of VIFs?
Correct. I was more focused on the throughput benefits of persistent grants
than on scalability to a large number of guests. I will run more tests with
more guests and share the numbers. Most likely it won't scale to that number
of VIFs, since maptrack usage grows much faster (nr_vifs * 512 * nr_queues
grants mapped); for that reason I also added the option of not exposing
"feature-persistent" when the xen-netback.max_persistent_gnts module parameter
is set to 0. I am aware of these issues with persistent grants, but the case I
had in mind was fewer VMs with higher throughput, which I believe is the
trade-off persistent grants offer.
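
For illustration only, a minimal sketch of how such a module parameter could
gate the feature advertisement. The parameter name max_persistent_gnts, the
per-queue limit of 512 and the "feature-persistent" key come from the cover
letter; the function name and the xenbus_printf() call are my assumptions,
modeled on how other Xen PV backends advertise optional features in xenstore,
not the actual RFC code:

/* Hedged sketch, not the RFC implementation. */
#include <linux/module.h>
#include <xen/xenbus.h>

/* Per-queue cap on persistently mapped grants; 0 disables the feature. */
static unsigned int max_persistent_gnts = 512;
module_param(max_persistent_gnts, uint, 0644);
MODULE_PARM_DESC(max_persistent_gnts,
		 "Maximum persistent grants per queue (0 disables the feature)");

static int advertise_persistent_feature(struct xenbus_device *dev)
{
	/*
	 * Only expose "feature-persistent" when the admin has not disabled
	 * it.  With N VIFs of Q queues each, and up to 512 persistently
	 * mapped grants per queue, dom0 consumes roughly N * Q * 512
	 * maptrack entries, e.g. 1000 single-queue VIFs -> ~512,000 entries.
	 */
	if (max_persistent_gnts == 0)
		return 0;

	return xenbus_printf(XBT_NIL, dev->nodename,
			     "feature-persistent", "%u", 1);
}

An administrator worried about maptrack exhaustion on a host with many VIFs
could then boot with xen-netback.max_persistent_gnts=0 and fall back to the
existing grant copy/map path.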