Message-ID: <555171CF.70106@plumgrid.com>
Date: Mon, 11 May 2015 20:21:51 -0700
From: Alexei Starovoitov <ast@...mgrid.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Pablo Neira Ayuso <pablo@...filter.org>,
Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org,
davem@...emloft.net, jhs@...atatu.com
Subject: Re: [PATCH 2/2 net-next] net: move qdisc ingress filtering code where it belongs

On 5/11/15 4:30 PM, Eric Dumazet wrote:
>
> For example , commit 7866a621043fbaca3d7389e9b9f69dd1a2e5a855
> helped a given workload, but probably made things slower for most common
> cases.
yes, indeed. Reverting it improves netif_receive + drop in ip_rcv
from 41.1 to 42.6. I've been trying to come up with a simple way
to roll the global ptype_all into skb->dev->ptype_all, but registering
a device notifier seems like overkill just to remove one loop from
netif_receive. I also tried partially removing the pt_prev logic from
the first half of netif_receive, keeping it only for the
deliver_ptype_list_skb() part, but that didn't help.
Then tried this:
-static int __netif_receive_skb(struct sk_buff *skb)
+static inline int __netif_receive_skb(struct sk_buff *skb)
...
-static int netif_receive_skb_internal(struct sk_buff *skb)
+static inline int netif_receive_skb_internal(struct sk_buff *skb)
it helped, going from 41.1 to 43.1, but the size increase is not negligible:
   text    data     bss     dec     hex filename
  55990    1667    2856   60513    ec61 dev.o.base
  56403    1907    2856   61166    eeee dev.o.inline
inlining only one of them (either __netif_receive_skb or
netif_receive_skb_internal) gives minimal gain.
Still exploring...