Message-ID: <20110613133513.GA29884@redhat.com>
Date: Mon, 13 Jun 2011 16:35:13 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Krishna Kumar2 <krkumar2@...ibm.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>,
Carsten Otte <cotte@...ibm.com>, habanero@...ux.vnet.ibm.com,
Heiko Carstens <heiko.carstens@...ibm.com>,
kvm@...r.kernel.org, lguest@...ts.ozlabs.org,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org,
linux390@...ibm.com, netdev@...r.kernel.org,
Rusty Russell <rusty@...tcorp.com.au>,
Martin Schwidefsky <schwidefsky@...ibm.com>, steved@...ibm.com,
Tom Lendacky <tahm@...ux.vnet.ibm.com>,
virtualization@...ts.linux-foundation.org,
Shirley Ma <xma@...ibm.com>
Subject: Re: [PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
On Mon, Jun 13, 2011 at 07:02:27PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@...hat.com> wrote on 06/07/2011 09:38:30 PM:
>
> > > This is on top of the patches applied by Rusty.
> > >
> > > Warning: untested. Posting now to give people a chance to
> > > comment on the API.
> >
> > OK, this seems to have survived some testing so far,
> > after I dropped patch 4 and fixed build for patch 3
> > (build fixup patch sent in reply to the original).
> >
> > I'll be mostly offline until Sunday, would appreciate
> > testing reports.
>
> Hi Michael,
>
> I ran the latest patches with 1K I/O (guest->local host); the
> results are below (60-second run for each test case):
Hi!

Did you apply this one:

  [PATCHv2 RFC 4/4] Revert "virtio: make add_buf return capacity remaining"

It turns out that patch has a bug and should be reverted; only
patches 1-3 should be applied.  Could you please confirm?
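
For context, the convention at stake: with patches 1-3 only, add_buf
keeps returning the remaining ring capacity, so a driver's xmit path
can stop the queue before the ring actually fills.  A rough sketch
from memory (simplified, hypothetical names such as sketch_priv, not
the actual virtio_net code):

    /* Sketch: xmit path relying on add_buf returning the number
     * of free ring slots.  Private-struct names are made up and
     * error handling is trimmed. */
    static netdev_tx_t sketch_start_xmit(struct sk_buff *skb,
                                         struct net_device *dev)
    {
            struct sketch_priv *priv = netdev_priv(dev);
            int capacity;

            /* >= 0: free slots remaining; < 0: ring full or error */
            capacity = virtqueue_add_buf(priv->svq, priv->sg,
                                         priv->out, priv->in, skb);
            if (capacity < 0) {
                    netif_stop_queue(dev);
                    return NETDEV_TX_BUSY;
            }

            virtqueue_kick(priv->svq);

            /* Stop early while there is still room for one more
             * worst-case packet (2 + MAX_SKB_FRAGS slots). */
            if (capacity < 2 + MAX_SKB_FRAGS)
                    netif_stop_queue(dev);

            return NETDEV_TX_OK;
    }

With 4/4 applied, add_buf would return 0 on success instead, so the
early-stop check above would need its capacity hint from elsewhere.
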
> ______________________________
> #sessions      BW%      SD%
> ______________________________
>     1        -25.6     47.0
>     2        -29.3     22.9
>     4          0.8      1.6
>     8          1.6      0
>    16         -1.6      4.1
>    32         -5.3      2.1
>    48         11.3     -7.8
>    64         -2.8      0.7
>    96         -6.2      0.6
>   128        -10.6     12.7
> ______________________________
> BW: -4.8    SD: 5.4
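
(Side note on reproducing: I'm guessing these come from per-session
netperf TCP_STREAM runs along the lines of

    netperf -H <peer> -l 60 -t TCP_STREAM -- -m 1024

with -m 16384 for the 16K runs; the exact invocation isn't stated,
so correct me if the setup differs.)
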
>
> I tested it again to see if the regression is fleeting (since
> the numbers vary quite a bit for 1K I/O, even guest->local
> host), but:
>
> ______________________________
> #sessions      BW%      SD%
> ______________________________
>     1         14.0    -17.3
>     2         19.9    -11.1
>     4          7.9    -15.3
>     8          9.6    -13.1
>    16          1.2     -7.3
>    32         -0.6    -13.5
>    48        -28.7     10.0
>    64         -5.7     -0.7
>    96         -9.4     -8.1
>   128         -9.4      0.7
> ______________________________
> BW: -3.7    SD: -2.0
>
>
> With 16K, there was an improvement in SD, but higher session
> counts seem to slightly degrade BW/SD:
>
> ______________________________
> #sessions      BW%      SD%
> ______________________________
>     1         30.9    -25.0
>     2         16.5    -19.4
>     4         -1.3      7.9
>     8          1.4      6.2
>    16          3.9     -5.4
>    32          0        4.3
>    48         -0.5      0.1
>    64         32.1     -1.5
>    96         -2.1     23.2
>   128         -7.4      3.8
> ______________________________
> BW: 5.0     SD: 7.5
>
>
> Thanks,
>
> - KK