Message-ID: <564B4443.90108@linux.intel.com>
Date: Tue, 17 Nov 2015 17:14:11 +0200
From: Eliezer Tamir <eliezer.tamir@...ux.intel.com>
To: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>, Eli Cohen <eli@...lanox.com>,
Amir Vadai <amirv@...lanox.com>,
Ariel Elior <ariel.elior@...gic.com>,
Willem de Bruijn <willemb@...gle.com>,
Rida Assaf <rida@...gle.com>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 5/9] net: network drivers no longer need to
implement ndo_busy_poll()
On 17/11/2015 15:57, Eric Dumazet wrote:
> Instead of having to implement complex ndo_busy_poll() method,
> drivers can simply rely on NAPI poll logic.
I really like where you are going with this series.
...
> We could go one step further, and make busy polling
> available for all NAPI drivers, but this would require
> that all netif_napi_del() calls are done in process context
> so that we can call synchronize_rcu().
> Full audit would be required.
>
> Before this is done, a driver still needs to call :
>
> - skb_mark_napi_id() for each skb provided to the stack, although we can
> easily do this directly in core networking stack in a future patch.
>
> - napi_hash_add() and napi_hash_del() to allocate/free a napi_id per napi.
>
> - Make sure RCU grace period is respected after napi_hash_del() before
> memory containing napi structure is freed.
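
For reference, this is roughly the per-driver boilerplate that implies
today (a sketch only; the foo_* names, the foo_priv layout and the
foo_rx_next_skb() helper are invented for illustration, not taken from
any real driver):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/busy_poll.h>

struct foo_priv {
	struct napi_struct napi;
};

/* hw-specific rx dequeue, not shown here */
static struct sk_buff *foo_rx_next_skb(struct foo_priv *priv);

static int foo_poll(struct napi_struct *napi, int budget)
{
	struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
	int work = 0;

	while (work < budget) {
		struct sk_buff *skb = foo_rx_next_skb(priv);

		if (!skb)
			break;
		skb_mark_napi_id(skb, napi);	/* 1) tag every skb */
		napi_gro_receive(napi, skb);
		work++;
	}
	if (work < budget)
		napi_complete(napi);
	return work;
}

static int foo_open(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	netif_napi_add(dev, &priv->napi, foo_poll, NAPI_POLL_WEIGHT);
	napi_hash_add(&priv->napi);		/* 2) allocate a napi_id */
	napi_enable(&priv->napi);
	return 0;
}

static int foo_close(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	napi_disable(&priv->napi);
	napi_hash_del(&priv->napi);
	netif_napi_del(&priv->napi);
	synchronize_net();	/* 3) grace period before napi memory is freed */
	return 0;
}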
Can we move all of these into the NAPI infrastructure?
Maybe the hash add/del could live inside netif_napi_add()/netif_napi_del()
themselves, something like the sketch below. With some way to enforce the
right RCU behavior on top of that, busy polling becomes completely
driver-agnostic, which IMHO outweighs the small gains you can get by
micro-optimizing ndo_busy_poll().
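
Just to make the idea concrete (the *_hashed names are made up wrappers;
the real thing would of course go straight into netif_napi_add/del, and
the del path assumes every caller is in process context):

#include <linux/netdevice.h>

static inline void netif_napi_add_hashed(struct net_device *dev,
					 struct napi_struct *napi,
					 int (*poll)(struct napi_struct *, int),
					 int weight)
{
	netif_napi_add(dev, napi, poll, weight);
	napi_hash_add(napi);	/* every napi gets an id, no driver code needed */
}

static inline void netif_napi_del_hashed(struct napi_struct *napi)
{
	napi_hash_del(napi);
	netif_napi_del(napi);
	synchronize_net();	/* grace period handled on behalf of the driver */
}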
On another note, any thoughts about unifying poll_controller with
regular poll?
cheers,
Eliezer