Message-ID: <4DEEED3C.3070302@cn.fujitsu.com>
Date: Wed, 08 Jun 2011 11:32:12 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>
CC: Avi Kivity <avi@...hat.com>, Marcelo Tosatti <mtosatti@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 0/15] KVM: optimize for MMIO handled
On 06/08/2011 11:25 AM, Xiao Guangrong wrote:
> On 06/08/2011 11:11 AM, Takuya Yoshikawa wrote:
>> On Tue, 07 Jun 2011 20:58:06 +0800
>> Xiao Guangrong <xiaoguangrong@...fujitsu.com> wrote:
>>
>>> The performance test results:
>>>
>>> Netperf (TCP_RR):
>>> ===========================
>>> ept is enabled:
>>>
>>>         Before     After
>>> 1st     709.58     734.60
>>> 2nd     715.40     723.75
>>> 3rd     713.45     724.22
>>>
>>> ept=0 bypass_guest_pf=0:
>>>
>>>         Before     After
>>> 1st     706.10     709.63
>>> 2nd     709.38     715.80
>>> 3rd     695.90     710.70
>>>
>>
>> Under what conditions does TCP_RR perform so badly?
>>
>> On a 1 Gbps network directly connecting two Intel servers,
>> I previously got results about 20 times better.
>>
>> Even when I used a KVM guest as the netperf client,
>> I got results more than 10 times better.
>>
>
> Um, which case did you test: ept=1, or ept=0 with bypass_guest_pf=0, or both?
>
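For reference, a minimal sketch of how the two cases above can be
selected; this assumes an Intel host with no guests running, since ept
and bypass_guest_pf are kvm_intel module parameters:

# ept enabled (the default)
rmmod kvm_intel; modprobe kvm_intel ept=1

# shadow paging with page-fault bypass disabled
rmmod kvm_intel; modprobe kvm_intel ept=0 bypass_guest_pf=0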
>> Could you give me a few more details about your test?
>>
>
> Sure. The KVM guest is the client; it uses an e1000 NIC and connects
> to the netperf server through NAT. The bandwidth of our network is
> 100 Mbps.
>
And this is my test script:
#!/bin/sh
# drop the page cache, dentries and inodes for a cold-cache run
echo 3 > /proc/sys/vm/drop_caches
# 60-second TCP request/response benchmark against the netperf server
./netperf -H "$HOST_NAME" -p "$PORT" -t TCP_RR -l 60
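
For completeness, a usage sketch; the server address, port, and script
name below are hypothetical examples (12865 is netperf's default
control port):

# on the server: start the netperf daemon on the chosen control port
netserver -p 12865

# on the guest: point the script at the server
HOST_NAME=192.168.122.1 PORT=12865 ./test-netperf.sh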