Message-ID: <52E8A9F1.3000700@huawei.com>
Date: Wed, 29 Jan 2014 15:12:49 +0800
From: Qin Chuanyu <qinchuanyu@...wei.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: <jasowang@...hat.com>, "Michael S. Tsirkin" <mst@...hat.com>,
"Anthony Liguori" <anthony@...emonkey.ws>,
KVM list <kvm@...r.kernel.org>, <netdev@...r.kernel.org>,
Peter Klausler <pmk@...gle.com>
Subject: Re: 8% performance improved by change tap interact with kernel stack
On 2014/1/28 22:49, Eric Dumazet wrote:
> On Tue, 2014-01-28 at 16:14 +0800, Qin Chuanyu wrote:
>> According to perf test results, I found that 5%-8% of CPU time is
>> spent in softirq processing caused by netif_rx_ni() being called from
>> tun_get_user().
>>
>> So I changed the call path so that the skb is transmitted more quickly:
>> from
>>     tun_get_user ->
>>         netif_rx_ni(skb);
>> to
>>     tun_get_user ->
>>         rcu_read_lock_bh();
>>         netif_receive_skb(skb);
>>         rcu_read_unlock_bh();
>
> No idea why you use rcu here ?
In my first version I forgot to take a lock when calling
netif_receive_skb(), and I then hit a spinlock deadlock while running
tcpdump. tcpdump receives skbs in netif_receive_skb() but also in
dev_queue_xmit(), and I noticed that dev_queue_xmit() takes
rcu_read_lock_bh() before transmitting the skb; this lock avoids the
race between softirq context and the transmitting thread:
	/* Disable soft irqs for various locks below. Also
	 * stops preemption for RCU.
	 */
	rcu_read_lock_bh();
Now I transmit the skb from the vhost thread, so I did the same thing.
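
In patch form, the change described above would look roughly like this
(a sketch against drivers/net/tun.c; the hunk context is illustrative,
not an exact diff):

```diff
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ tun_get_user():
-	netif_rx_ni(skb);
+	/* Deliver the skb inline on the calling (vhost) thread instead
+	 * of deferring it to the backlog softirq via netif_rx_ni().
+	 * rcu_read_lock_bh() disables bottom halves, mirroring
+	 * dev_queue_xmit(), so packet taps such as tcpdump cannot race
+	 * between the softirq path and this thread.
+	 */
+	rcu_read_lock_bh();
+	netif_receive_skb(skb);
+	rcu_read_unlock_bh();
```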