Message-Id: <20170925.203611.1769058727594321517.davem@davemloft.net>
Date: Mon, 25 Sep 2017 20:36:11 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: eric.dumazet@...il.com
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net: speed up skb_rbtree_purge()
From: Eric Dumazet <eric.dumazet@...il.com>
Date: Sat, 23 Sep 2017 12:39:12 -0700
> From: Eric Dumazet <edumazet@...gle.com>
>
> As measured in my prior patch ("sch_netem: faster rb tree removal"),
> rbtree_postorder_for_each_entry_safe() is nice looking but much slower
> than using rb_next() directly, except when the tree is small enough
> to fit in CPU caches (then the cost is the same).
>
> Also note that there is no increase in text size:
> $ size net/core/skbuff.o.before net/core/skbuff.o
>    text    data   bss     dec    hex  filename
>   40711    1298     0   42009   a419  net/core/skbuff.o.before
>   40711    1298     0   42009   a419  net/core/skbuff.o
>
Applied.
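
For context, the rb_next()-based purge loop described in the commit message
above looks roughly like the sketch below (kernel C; the function name
skb_rbtree_purge_sketch is illustrative, and skb->rbnode is assumed to be the
sk_buff rbtree linkage of that kernel version):

#include <linux/rbtree.h>
#include <linux/skbuff.h>

/* Sketch: purge every skb from an rbtree by walking it with
 * rb_first()/rb_next() instead of rbtree_postorder_for_each_entry_safe().
 */
static void skb_rbtree_purge_sketch(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

		/* Grab the successor before erasing the current node;
		 * rb_erase() only unlinks the node, it does not free it,
		 * so the saved pointer stays valid.
		 */
		p = rb_next(p);
		rb_erase(&skb->rbnode, root);
		kfree_skb(skb);
	}
}

The key point is that rb_next() is taken before rb_erase() removes the
current node, so the walk remains valid while the tree shrinks and each
skb is visited exactly once.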