Message-ID: <aa146e80-4cfd-888f-f6d2-bf4ec0eb7373@gmail.com>
Date: Sun, 24 Sep 2017 20:05:30 -0600
From: David Ahern <dsahern@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
David Miller <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>,
Stephen Hemminger <stephen@...workplumber.org>
Subject: Re: [PATCH net-next] sch_netem: faster rb tree removal
On 9/24/17 7:57 PM, David Ahern wrote:
> On 9/23/17 12:07 PM, Eric Dumazet wrote:
>> From: Eric Dumazet <edumazet@...gle.com>
>>
>> While running TCP tests involving netem storing millions of packets,
>> I had the idea to speed up tfifo_reset() and ran some experiments.
>>
>> I tried the rbtree_postorder_for_each_entry_safe() method that is
>> used in skb_rbtree_purge() but discovered it was slower than the
>> current tfifo_reset() method.
>>
>> I measured the time taken to release the skbs at three occupancy levels
>> (10^4, 10^5 and 10^6 skbs) using each of three methods:
>>
>> 1) (current 'naive' method)
>>
>> while ((p = rb_first(&q->t_root))) {
>>         struct sk_buff *skb = netem_rb_to_skb(p);
>>
>>         rb_erase(p, &q->t_root);
>>         rtnl_kfree_skbs(skb, skb);
>> }
>>
>> 2) Use rb_next() instead of rb_first() in the loop:
>>
>> p = rb_first(&q->t_root);
>> while (p) {
>>         struct sk_buff *skb = netem_rb_to_skb(p);
>>
>>         p = rb_next(p);
>>         rb_erase(&skb->rbnode, &q->t_root);
>>         rtnl_kfree_skbs(skb, skb);
>> }
>>
>> 3) "optimized" method using rbtree_postorder_for_each_entry_safe()
>>
>> struct sk_buff *skb, *next;
>>
>> rbtree_postorder_for_each_entry_safe(skb, next,
>>                                      &q->t_root, rbnode) {
>>         rtnl_kfree_skbs(skb, skb);
>> }
>> q->t_root = RB_ROOT;
>>
>> Results:
>>
>> method_1:while (rb_first()) rb_erase() 10000 skbs in 690378 ns (69 ns per skb)
>> method_2:rb_first; while (p) { p = rb_next(p); ...} 10000 skbs in 541846 ns (54 ns per skb)
>> method_3:rbtree_postorder_for_each_entry_safe() 10000 skbs in 868307 ns (86 ns per skb)
>>
>> method_1:while (rb_first()) rb_erase() 99996 skbs in 7804021 ns (78 ns per skb)
>> method_2:rb_first; while (p) { p = rb_next(p); ...} 100000 skbs in 5942456 ns (59 ns per skb)
>> method_3:rbtree_postorder_for_each_entry_safe() 100000 skbs in 11584940 ns (115 ns per skb)
>>
>> method_1:while (rb_first()) rb_erase() 1000000 skbs in 108577838 ns (108 ns per skb)
>> method_2:rb_first; while (p) { p = rb_next(p); ...} 1000000 skbs in 82619635 ns (82 ns per skb)
>> method_3:rbtree_postorder_for_each_entry_safe() 1000000 skbs in 127328743 ns (127 ns per skb)
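
(As an aside, numbers like these can be gathered by timing the purge
directly; the helper below is only an assumed illustration of such a
harness, with time_tfifo_reset() being a hypothetical name and not code
from the patch:

        /* Hypothetical timing wrapper: time one tfifo_reset() call and
         * report the skb count and the nanoseconds spent.
         */
        static void time_tfifo_reset(struct Qdisc *sch, unsigned int nr_skbs)
        {
                u64 t0 = ktime_get_ns();

                tfifo_reset(sch);
                pr_info("%u skbs in %llu ns\n", nr_skbs, ktime_get_ns() - t0);
        }
)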
>>
>> Method 2) is simply faster, probably because it maintains a smaller
>> working set size.
>>
>> Note that this is the method we use in tcp_ofo_queue() already.
>>
>> I will also change skb_rbtree_purge() in a second patch.
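
(For reference, applying the same rb_next()-before-rb_erase() pattern to
skb_rbtree_purge() would presumably end up looking roughly like the sketch
below; this is my untested illustration, not the actual second patch:

        void skb_rbtree_purge(struct rb_root *root)
        {
                struct rb_node *p = rb_first(root);

                while (p) {
                        struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);

                        /* Advance before erasing, then free the detached skb. */
                        p = rb_next(p);
                        rb_erase(&skb->rbnode, root);
                        kfree_skb(skb);
                }
        }
)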
>>
>> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>> ---
>> net/sched/sch_netem.c | 7 ++++---
>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
>> index 063a4bdb9ee6f26b01387959e8f6ccd15ec16191..5a4f1008029068372019a965186e7a3c0a18aac3 100644
>> --- a/net/sched/sch_netem.c
>> +++ b/net/sched/sch_netem.c
>> @@ -361,12 +361,13 @@ static psched_time_t packet_len_2_sched_time(unsigned int len, struct netem_sche
>>  static void tfifo_reset(struct Qdisc *sch)
>>  {
>>          struct netem_sched_data *q = qdisc_priv(sch);
>> -        struct rb_node *p;
>> +        struct rb_node *p = rb_first(&q->t_root);
>>
>> -        while ((p = rb_first(&q->t_root))) {
>> +        while (p) {
>>                  struct sk_buff *skb = netem_rb_to_skb(p);
>>
>> -                rb_erase(p, &q->t_root);
>> +                p = rb_next(p);
>> +                rb_erase(&skb->rbnode, &q->t_root);
>>                  rtnl_kfree_skbs(skb, skb);
>>          }
>>  }
>>
>>
>
> Hi Eric:
>
> I'm guessing the cost is in the rb_first and rb_next computations. Did
> you consider something like this:
>
> struct rb_root *root;
> struct rb_node **p = &root->rb_node;
>
> while (*p != NULL) {
>         struct foobar *fb;
>
>         fb = container_of(*p, struct foobar, rb_node);
>         // fb processing
          rb_erase(&fb->rb_node, root);
>         p = &root->rb_node;
> }
>
Oops, I dropped the rb_erase when consolidating the code into this snippet.
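
Putting it back together, the suggestion amounts to always erasing whatever
node currently sits at the root, so no rb_first()/rb_next() walk is needed.
A self-contained sketch follows; struct foobar, foobar_tree_purge() and the
plain kfree() are stand-ins for the real entry type and its release:

        #include <linux/rbtree.h>
        #include <linux/slab.h>

        struct foobar {
                struct rb_node rb_node;
                /* payload ... */
        };

        static void foobar_tree_purge(struct rb_root *root)
        {
                while (root->rb_node) {
                        struct foobar *fb = container_of(root->rb_node,
                                                         struct foobar, rb_node);

                        /* rb_erase() rebalances and promotes a new root. */
                        rb_erase(&fb->rb_node, root);
                        kfree(fb);
                }
        }

Whether that actually beats the rb_next() walk would of course need the same
kind of measurement as above.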