Message-ID: <5761418F.2050407@mojatatu.com>
Date: Wed, 15 Jun 2016 07:52:47 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
davem@...emloft.net
Cc: eric.dumazet@...il.com, brouer@...hat.com
Subject: Re: [PATCH net-next V2] tun: introduce tx skb ring
On 16-06-15 04:38 AM, Jason Wang wrote:
> We used to queue tx packets in sk_receive_queue; this is less
> efficient since it requires spinlocks to synchronize between the
> producer and the consumer.
>
> This patch tries to address this by:
>
> - introduce a new mode, enabled only when IFF_TX_ARRAY is set, which
>   switches from sk_receive_queue to a fixed-size skb array with 256
>   entries [see the ring sketch below the quote].
> - introduce a new proto_ops, peek_len, used to peek at the length of
>   the queued skb [see the peek_len sketch below].
> - implement a tun version of peek_len for vhost_net and convert
>   vhost_net to use peek_len where possible.
>
> A pktgen test shows about an 18% improvement in guest-receive pps for
> small buffers:
>
> Before: ~1220000pps
> After : ~1440000pps
>
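
For anyone skimming the archive, here is a minimal user-space sketch of
the single-producer/single-consumer ring idea the changelog describes.
All names are invented for illustration; the real code is the kernel's
skb array built on ptr_ring, and the memory barriers the kernel version
needs are omitted here. The trick is that the producer only writes
empty (NULL) slots and the consumer only clears full ones, so each side
touches only its own index and the fast path needs no spinlock:

#include <stdio.h>

#define RING_SIZE 256   /* matches the 256-entry array in the patch */

struct ring {
        void *slot[RING_SIZE];
        unsigned int prod;      /* next slot the producer fills */
        unsigned int cons;      /* next slot the consumer drains */
};

/* Producer side: returns -1 when the ring is full. */
static int ring_produce(struct ring *r, void *ptr)
{
        if (r->slot[r->prod])   /* not yet drained by the consumer */
                return -1;
        r->slot[r->prod] = ptr;
        r->prod = (r->prod + 1) % RING_SIZE;
        return 0;
}

/* Consumer side: returns NULL when the ring is empty. */
static void *ring_consume(struct ring *r)
{
        void *ptr = r->slot[r->cons];

        if (ptr) {
                r->slot[r->cons] = NULL;        /* hand the slot back */
                r->cons = (r->cons + 1) % RING_SIZE;
        }
        return ptr;
}

int main(void)
{
        static struct ring r;   /* zero-initialized: all slots empty */
        int pkt = 42;

        ring_produce(&r, &pkt);
        printf("consumed %d\n", *(int *)ring_consume(&r));
        return 0;
}

A consequence of using the slot contents as the full/empty flag is that
NULL itself cannot be queued; the kernel ring has the same restriction.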
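
And a similarly hedged sketch of the peek_len idea: report the length
of the packet at the head of the queue without dequeuing it, so the
caller (vhost_net in the patch) can size its buffer first. The pkt
struct and its len field are made-up stand-ins for struct sk_buff, and
the real hook is a proto_ops method:

#include <stddef.h>
#include <stdio.h>

/* Stand-in for struct sk_buff; only the length matters here. */
struct pkt {
        size_t len;
};

struct ring {
        struct pkt *slot[256];
        unsigned int cons;      /* head: next slot the consumer drains */
};

/* Peek at the head packet's length without consuming it. */
static size_t ring_peek_len(const struct ring *r)
{
        const struct pkt *p = r->slot[r->cons];

        return p ? p->len : 0;  /* 0 means "queue empty" */
}

int main(void)
{
        static struct ring r;
        struct pkt p = { .len = 1500 };

        r.slot[0] = &p;
        printf("next packet: %zu bytes\n", ring_peek_len(&r));
        return 0;
}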
So this mostly exercises the skb array improvements. For tun, it
would be useful to see general performance numbers for the user/kernel
crossing (i.e., tun read/write).
If you have the cycles, can you run such tests?
cheers,
jamal