Message-ID: <20140128094138.GA17332@redhat.com>
Date: Tue, 28 Jan 2014 11:41:38 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Qin Chuanyu <qinchuanyu@...wei.com>
Cc: jasowang@...hat.com, Anthony Liguori <anthony@...emonkey.ws>,
KVM list <kvm@...r.kernel.org>, netdev@...r.kernel.org
Subject: Re: 8% performance improved by change tap interact with kernel stack
On Tue, Jan 28, 2014 at 05:14:46PM +0800, Qin Chuanyu wrote:
> On 2014/1/28 16:34, Michael S. Tsirkin wrote:
> >On Tue, Jan 28, 2014 at 04:14:12PM +0800, Qin Chuanyu wrote:
> >>According to perf test results, I found that 5%-8% of CPU time is
> >>spent in softirq because tun_get_user calls netif_rx_ni.
> >>
> >>So I changed the call so that the skb is transmitted more quickly:
> >>from
> >> tun_get_user ->
> >> netif_rx_ni(skb);
> >>to
> >> tun_get_user ->
> >> rcu_read_lock_bh();
> >> netif_receive_skb(skb);
> >> rcu_read_unlock_bh();
> >>
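> >>In patch form this is roughly the following (a sketch against
> >>drivers/net/tun.c; the exact surrounding context depends on the
> >>kernel version):
> >>
> >>--- a/drivers/net/tun.c
> >>+++ b/drivers/net/tun.c
> >>@@ tun_get_user @@
> >>-	netif_rx_ni(skb);
> >>+	/* Deliver the skb to the stack synchronously on this CPU
> >>+	 * instead of queueing it for the NET_RX softirq.
> >>+	 * netif_receive_skb needs BHs disabled, which
> >>+	 * rcu_read_lock_bh provides.
> >>+	 */
> >>+	rcu_read_lock_bh();
> >>+	netif_receive_skb(skb);
> >>+	rcu_read_unlock_bh();
> >>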
> >>The test result is as below:
> >> CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
> >> NIC: intel 82599
> >> Host OS/Guest OS: SUSE 11 SP3
> >> QEMU 1.6
> >> netperf UDP 512 (VM tx)
> >> test model: VM->host->host
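> >> (presumably invoked along the lines of
> >>     netperf -H <dest> -t UDP_STREAM -- -m 512
> >> with <dest> being the receiving host)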
> >>
> >> before the change: 2.00 Gbps  461146 pps
> >> after the change:  2.16 Gbps  498782 pps
> >>
> >>This change yields an 8% performance gain.
> >>Is there any problem with this patch?
> >
> >I think it's okay - IIUC this way we are processing xmit directly
> >instead of going through softirq.
> >I was meaning to try this - I'm glad you are looking into this.
> >
> >Could you please check latency results?
> >
> netperf UDP_RR 512
> test model: VM->host->host
>
> before the change: 11108
> after the change:  11480
>
> about 3% gained by this patch
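>
> (For reference, a matching invocation would presumably be roughly
>     netperf -H <dest> -t UDP_RR -- -r 512,512
> with the numbers above being transactions per second.)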
>
>
Nice.
What about CPU utilization?
It's trivially easy to speed up networking by
burning a lot of CPU, so we must make sure the
patch isn't doing that.
And I think we should see some tests with TCP as well, and
try several message sizes.
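For example (illustrative netperf runs; -c/-C make netperf report
local and remote CPU utilization):

	netperf -H <dest> -c -C -t TCP_STREAM -- -m 64
	netperf -H <dest> -c -C -t TCP_STREAM -- -m 512
	netperf -H <dest> -c -C -t TCP_STREAM -- -m 1460
	netperf -H <dest> -c -C -t TCP_STREAM -- -m 16384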