Message-ID: <1372275874.3301.206.camel@edumazet-glaptop>
Date: Wed, 26 Jun 2013 12:44:34 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: [RFC] about "net: orphan frags on receive" insanity
On Wed, 2013-06-26 at 22:22 +0300, Michael S. Tsirkin wrote:
> The point is we don't know the final destination of the packet
> until it goes through the stack.
>
> We don't want to trigger a copy for all data we get from tun:
> we only want to do this if the data has a chance to get
> queued somewhere indefinitely.
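
For reference, the copy in question is done by skb_orphan_frags(): it is a
no-op unless the skb really carries user-space frags marked zerocopy, in
which case it calls skb_copy_ubufs(). Roughly, as of this kernel
(include/linux/skbuff.h):

static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
{
	/* cheap test: only zerocopy skbs (tun/vhost) have this flag set */
	if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY)))
		return 0;
	/* copy the user pages into kernel memory and release the ubufs */
	return skb_copy_ubufs(skb, gfp_mask);
}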
I think you missed my point.
I am pretty sure it should be done from netif_rx(), not from
__netif_receive_skb_core()
so that modern NIC devices do not have to pay this extra cost.
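
A rough sketch of the premise (toy drivers, names are hypothetical): zerocopy
skbs come from tun-style drivers, which inject them into the stack via
netif_rx_ni()/netif_rx(), while NAPI drivers hand packets straight to
netif_receive_skb()/napi_gro_receive() and never pass through netif_rx().

static void toy_tun_rx(struct sk_buff *skb)
{
	/* skb may still reference user pages (SKBTX_DEV_ZEROCOPY) */
	netif_rx_ni(skb);	/* -> netif_rx() -> per-cpu backlog */
}

static void toy_nic_napi_rx(struct napi_struct *napi, struct sk_buff *skb)
{
	/* NIC fast path: reaches __netif_receive_skb_core() directly */
	napi_gro_receive(napi, skb);
}

With the check moved into netif_rx(), the NAPI path above never evaluates
skb_orphan_frags() at all.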
# size net/core/dev_*.o
   text    data     bss     dec     hex filename
  41928     963     752   43643    aa7b net/core/dev_before.o
  41579     963     752   43294    a91e net/core/dev_after.o
Untested patch:
diff --git a/net/core/dev.c b/net/core/dev.c
index fc1e289..3730318 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1646,8 +1646,6 @@ static inline int deliver_skb(struct sk_buff *skb,
 			      struct packet_type *pt_prev,
 			      struct net_device *orig_dev)
 {
-	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))
-		return -ENOMEM;
 	atomic_inc(&skb->users);
 	return pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
 }
@@ -3133,6 +3131,9 @@ int netif_rx(struct sk_buff *skb)
 	if (netpoll_rx(skb))
 		return NET_RX_DROP;
 
+	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))
+		return NET_RX_DROP;
+
 	net_timestamp_check(netdev_tstamp_prequeue, skb);
 
 	trace_netif_rx(skb);
@@ -3498,10 +3499,7 @@ ncls:
 	}
 
 	if (pt_prev) {
-		if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))
-			goto drop;
-		else
-			ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
+		ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
 	} else {
 drop:
 		atomic_long_inc(&skb->dev->rx_dropped);