Message-Id: <20110608121128.2caecdb3.yoshikawa.takuya@oss.ntt.co.jp>
Date: Wed, 8 Jun 2011 12:11:28 +0900
From: Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>
To: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
Cc: Avi Kivity <avi@...hat.com>, Marcelo Tosatti <mtosatti@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 0/15] KVM: optimize for MMIO handled
On Tue, 07 Jun 2011 20:58:06 +0800
Xiao Guangrong <xiaoguangrong@...fujitsu.com> wrote:
> The performance test result:
>
> Netperf (TCP_RR):
> ===========================
> ept is enabled:
>
>          Before      After
> 1st      709.58      734.60
> 2nd      715.40      723.75
> 3rd      713.45      724.22
>
> ept=0 bypass_guest_pf=0:
>
>          Before      After
> 1st      706.10      709.63
> 2nd      709.38      715.80
> 3rd      695.90      710.70
>
Under what conditions does TCP_RR perform so badly?
On a 1Gbps network, directly connecting two Intel servers,
I previously got a result 20 times better than this.
Even when I used a KVM guest as the netperf client,
the result was more than 10 times better.
Could you tell me a bit more about the details of your test?
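For context, netperf's TCP_RR score is transactions per second, so the quoted rates translate directly into per-transaction round-trip latency. A quick back-of-the-envelope sketch (the 14000 trans/s figure is only an illustration of "20 times better", not a measured number):

```python
# netperf TCP_RR reports transactions per second; each transaction is one
# request/response round trip, so latency is simply the reciprocal.
def rtt_ms(transactions_per_sec):
    """Approximate round-trip latency in milliseconds for a TCP_RR rate."""
    return 1000.0 / transactions_per_sec

# ~710 trans/s, as in the tables above:
print(round(rtt_ms(710), 2))    # about 1.41 ms per round trip
# ~14000 trans/s, i.e. roughly 20x better (illustrative only):
print(round(rtt_ms(14000), 3))  # about 0.071 ms per round trip
```

A rate around 700 trans/s thus implies well over a millisecond per round trip, which is what makes the setup details worth asking about.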
> Kernbench (do not redirect output to /dev/null)
> ==========================
> ept is enabled:
>
>          Before         After
> 1st      2m34.749s      2m33.482s
> 2nd      2m34.651s      2m33.161s
> 3rd      2m34.543s      2m34.271s
>
> ept=0 bypass_guest_pf=0:
>
>          Before         After
> 1st      4m43.467s      4m41.873s
> 2nd      4m45.225s      4m41.668s
> 3rd      4m47.029s      4m40.128s
>