Message-ID: <20170124150028.4981db42@xeon-e3>
Date: Tue, 24 Jan 2017 15:00:28 -0800
From: Stephen Hemminger <stephen@...workplumber.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: davem@...emloft.net, kys@...rosoft.com, netdev@...r.kernel.org,
Stephen Hemminger <sthemmin@...rosoft.com>
Subject: Re: [PATCH 18/18] netvsc: call netif_receive_skb
On Tue, 24 Jan 2017 14:39:19 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tue, 2017-01-24 at 13:06 -0800, Stephen Hemminger wrote:
> > To improve performance, netvsc can call the network stack directly and
> > avoid the local backlog queue. This is safe since incoming packets are
> > already handled in softirq context, because the receive callback is
> > called from a tasklet.
>
> Is this tasklet implementing a limit or something ?
The ring buffer only holds a fixed amount of data, so there is a limit,
but it is quite large.
>
> netif_rx() queues packets to the backlog, which is processed later by
> net_rx_action() like other NAPI, with limit of 64 packets per round.
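The fairness property Eric describes can be shown with a toy model (plain
Python, not kernel code; the queue names and packet counts are made up):
each NAPI instance, including the backlog, is given a fixed per-round
packet budget, so one flooded source cannot starve the others within a
round of net_rx_action.

```python
from collections import deque

BUDGET = 64  # per-NAPI packet quota per polling round, as in net_rx_action

def poll_round(queues):
    """One net_rx_action-style round: serve each queue at most BUDGET packets."""
    served = {}
    for name, q in queues.items():
        n = min(BUDGET, len(q))
        for _ in range(n):
            q.popleft()  # stand-in for delivering one packet up the stack
        served[name] = n
    return served

# A flood on the backlog cannot monopolize the round; the other NIC
# still gets all of its packets serviced.
queues = {"backlog": deque(range(1000)), "other-nic": deque(range(50))}
first_round = poll_round(queues)
```

Bypassing the backlog with netif_receive_skb() skips this budgeting
entirely, which is the trade-off being discussed.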
Since netvsc_receive has to copy all incoming data, it is a bottleneck
in itself. By the time net_rx_action is invoked, the CPU cache is stale.
>
> Calling netif_receive_skb() means you can escape this ability to fairly
> distribute the cpu cycles among multiple NAPI.
>
> I do not see range_cnt being capped in netvsc_receive()
There is no cap. NAPI support is coming and will help.
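A cap of the kind Eric is asking about would look like the budget handling
in a standard NAPI poll routine. A hypothetical sketch (plain Python, not
the actual netvsc code; names and counts are illustrative): drain at most
`budget` ring entries per call, and report whether work remains so the
poll gets rescheduled rather than running unbounded.

```python
def napi_poll(ring, budget=64):
    """Hypothetical budgeted receive: process at most `budget` ring entries.

    Returns (done, remaining): packets handled this call, and how many are
    left, so the caller knows whether to reschedule the poll.
    """
    done = 0
    while ring and done < budget:
        ring.pop(0)  # stand-in for copying one packet out of the host ring
        done += 1
    return done, len(ring)

ring = list(range(150))
d1, r1 = napi_poll(ring)  # first call stops at the budget, work remains
d2, r2 = napi_poll(ring)  # second call also hits the budget
d3, r3 = napi_poll(ring)  # final call drains the rest and completes
```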