Message-ID: <20110219112842.GE2782@psychotron.redhat.com>
Date: Sat, 19 Feb 2011 12:28:43 +0100
From: Jiri Pirko <jpirko@...hat.com>
To: Nicolas de Pesloüan
<nicolas.2p.debian@...il.com>
Cc: Jay Vosburgh <fubar@...ibm.com>,
David Miller <davem@...emloft.net>, kaber@...sh.net,
eric.dumazet@...il.com, netdev@...r.kernel.org,
shemminger@...ux-foundation.org, andy@...yhouse.net
Subject: Re: [patch net-next-2.6 V3] net: convert bonding to use rx_handler
Sat, Feb 19, 2011 at 12:08:31PM CET, jpirko@...hat.com wrote:
>Sat, Feb 19, 2011 at 11:56:23AM CET, nicolas.2p.debian@...il.com wrote:
>>Le 19/02/2011 09:05, Jiri Pirko a écrit :
>>>This patch converts bonding to use rx_handler. Results in cleaner
>>>__netif_receive_skb() with much less exceptions needed. Also
>>>bond-specific work is moved into bond code.
>>>
>>>Signed-off-by: Jiri Pirko<jpirko@...hat.com>
>>>
>>>v1->v2:
>>> using skb_iif instead of new input_dev to remember original
>>> device
>>>v2->v3:
>>> set orig_dev = skb->dev if skb_iif is set
>>>
>>
>>Why do we need to let the rx_handlers call netif_rx() or __netif_receive_skb()?
>>
>>Bonding used to be handled with very little overhead, simply replacing
>>skb->dev with skb->dev->master. Time has passed and we eventually
>>added a lot of special processing for bonding into __netif_receive_skb(),
>>but the overhead remained very light.
>>
>>Calling netif_rx() (or __netif_receive_skb()) to allow nesting would probably lead to some overhead.
>>
>>Can't we, instead, loop inside __netif_receive_skb(), and deliver
>>whatever needs to be delivered, to whoever needs it, inside the loop?
>>
>>rx_handler = rcu_dereference(skb->dev->rx_handler);
>>while (rx_handler) {
>> /* ... */
>> orig_dev = skb->dev;
>> skb = rx_handler(skb);
>> /* ... */
>> rx_handler = (skb->dev != orig_dev) ? rcu_dereference(skb->dev->rx_handler) : NULL;
>>}
>>
>>This would reduce the overhead, while still allowing nesting: vlan on
>>top of bonding, bridge on top of bonding, ...
>
>I see your point. Makes sense to me. But the loop would have to include
>at least processing of ptype_all too. I'm going to cook a follow-up
>patch.
>
DRAFT (doesn't modify rx_handlers):
diff --git a/net/core/dev.c b/net/core/dev.c
index 4ebf7fe..e5dba47 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3115,6 +3115,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
 {
 	struct packet_type *ptype, *pt_prev;
 	rx_handler_func_t *rx_handler;
+	struct net_device *dev;
 	struct net_device *orig_dev;
 	struct net_device *null_or_dev;
 	int ret = NET_RX_DROP;
@@ -3129,7 +3130,9 @@ static int __netif_receive_skb(struct sk_buff *skb)
 	if (netpoll_receive_skb(skb))
 		return NET_RX_DROP;

-	__this_cpu_inc(softnet_data.processed);
+	skb->skb_iif = skb->dev->ifindex;
+	orig_dev = skb->dev;
+
 	skb_reset_network_header(skb);
 	skb_reset_transport_header(skb);
 	skb->mac_len = skb->network_header - skb->mac_header;
@@ -3138,12 +3141,9 @@ static int __netif_receive_skb(struct sk_buff *skb)

 	rcu_read_lock();

-	if (!skb->skb_iif) {
-		skb->skb_iif = skb->dev->ifindex;
-		orig_dev = skb->dev;
-	} else {
-		orig_dev = dev_get_by_index_rcu(dev_net(skb->dev), skb->skb_iif);
-	}
+another_round:
+	__this_cpu_inc(softnet_data.processed);
+	dev = skb->dev;

 #ifdef CONFIG_NET_CLS_ACT
 	if (skb->tc_verd & TC_NCLS) {
@@ -3153,7 +3153,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
 #endif

 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
-		if (!ptype->dev || ptype->dev == skb->dev) {
+		if (!ptype->dev || ptype->dev == dev) {
 			if (pt_prev)
 				ret = deliver_skb(skb, pt_prev, orig_dev);
 			pt_prev = ptype;
@@ -3167,7 +3167,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
 ncls:
 #endif

-	rx_handler = rcu_dereference(skb->dev->rx_handler);
+	rx_handler = rcu_dereference(dev->rx_handler);
 	if (rx_handler) {
 		if (pt_prev) {
 			ret = deliver_skb(skb, pt_prev, orig_dev);
@@ -3176,6 +3176,8 @@ ncls:
 		skb = rx_handler(skb);
 		if (!skb)
 			goto out;
+		if (dev != skb->dev)
+			goto another_round;
 	}

 	if (vlan_tx_tag_present(skb)) {
--