Message-ID: <e27a6618-3a68-fa60-53a0-109b4df70482@gmail.com>
Date: Wed, 25 Apr 2018 07:52:15 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Toke Høiland-Jørgensen <toke@...e.dk>,
netdev@...r.kernel.org
Cc: cake@...ts.bufferbloat.net, Dave Taht <dave.taht@...il.com>
Subject: Re: [PATCH net-next v3] Add Common Applications Kept Enhanced (cake)
qdisc
On 04/25/2018 06:42 AM, Toke Høiland-Jørgensen wrote:
> sch_cake targets the home router use case and is intended to squeeze the
> most bandwidth and latency out of even the slowest ISP links and routers,
> while presenting an API simple enough that even an ISP can configure it.
>
* Support for ack filtering.
Oh my god. Cake became a monster.

syzkaller will be very happy to trigger all kinds of bugs in it.

The lack of any pskb_may_pull() before parsing headers is really concerning.

How does the ack filter deal with reordering?
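To illustrate the concern, here is a minimal userspace model of the pskb_may_pull() pattern (all names here are hypothetical stand-ins, not code from the patch): header bytes must be confirmed to be present in the linear area before they are dereferenced, otherwise a crafted short packet reads out of bounds.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of an skb: only the linear part matters for this sketch. */
struct toy_skb {
	const unsigned char *data; /* start of packet data */
	size_t data_len;           /* bytes available in the linear area */
};

/* Model of pskb_may_pull(): succeed only if 'len' bytes are already
 * linear (the real helper would also try to pull paged data into the
 * linear area before failing). */
static bool toy_may_pull(const struct toy_skb *skb, size_t len)
{
	return len <= skb->data_len;
}

/* Read the 2-byte TCP source port, but only after checking that a
 * minimal 20-byte TCP header is accessible. Returns -1 on a
 * truncated packet instead of reading out of bounds. */
static int toy_tcp_source_port(const struct toy_skb *skb)
{
	if (!toy_may_pull(skb, 20))
		return -1;
	return (skb->data[0] << 8) | skb->data[1];
}
```

The point of the check is exactly the failure path: a truncated packet must be rejected before any header field is touched.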
Also, the forced GSO segmentation looks wrong to me.

It kills the xmit_more gain we get when GSO is performed after
qdisc dequeue, right before hitting the device.

This should really be driven by a parameter, some threshold on the skb size.

What performance numbers do you get on a 10Gbit NIC, for example?

Also, how can the ack filter suppress packets after skb_gso_segment()?
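A sketch of the threshold suggested above (the function and parameter names are hypothetical, not from the patch): only force software segmentation when the super-packet is large enough that per-segment shaping matters, and leave smaller GSO skbs intact so the device still benefits from xmit_more batching.

```c
#include <stdbool.h>

/* Hypothetical gating check for software GSO segmentation in the
 * qdisc enqueue path. A threshold of 0 models "never segment",
 * i.e. always preserve the GSO super-packet. */
static bool toy_should_segment(unsigned int skb_len,
			       unsigned int gso_threshold)
{
	return gso_threshold != 0 && skb_len > gso_threshold;
}
```

With such a knob, a 64KB super-packet on a slow DSL link would still be split for accurate shaping, while MTU-sized packets on a 10Gbit NIC would pass through unsegmented.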
+	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+	if (IS_ERR_OR_NULL(segs))
+		return qdisc_drop(skb, sch, to_free);
+
+	while (segs) {
+		nskb = segs->next;
+		segs->next = NULL;
+		qdisc_skb_cb(segs)->pkt_len = segs->len;
+		cobalt_set_enqueue_time(segs, now);
+		get_cobalt_cb(segs)->adjusted_len = cake_overhead(q, segs);
+		flow_queue_add(flow, segs);
+
+		if (q->ack_filter)
+			ack = cake_ack_filter(q, flow);
+
All the following must be dead code, right ???
+		if (ack) {
+			b->ack_drops++;
+			sch->qstats.drops++;
+			b->bytes += ack->len;
+			slen += segs->len - ack->len;
+			q->buffer_used += segs->truesize -
+					  ack->truesize;
+			if (q->rate_flags & CAKE_FLAG_INGRESS)
+				cake_advance_shaper(q, b, ack,
+						    now, true);
+
+			qdisc_tree_reduce_backlog(sch, 1,
+						  qdisc_pkt_len(ack));
+			consume_skb(ack);
+		} else {
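For reference, the accounting in the quoted branch can be modeled in isolation (field names here are simplified stand-ins, not the real qdisc structures): when the ack filter drops an earlier pure ack, the drop counters advance and the buffer usage changes by the difference between the newly enqueued segment and the removed ack.

```c
/* Minimal model of the counter updates when an earlier ack is
 * filtered while a new segment is enqueued. */
struct toy_counters {
	unsigned long ack_drops;   /* acks suppressed by the filter */
	unsigned long drops;       /* total qdisc drops */
	long buffer_used;          /* bytes of truesize in the queue */
};

static void toy_account_filtered_ack(struct toy_counters *c,
				     unsigned int seg_truesize,
				     unsigned int ack_truesize)
{
	c->ack_drops++;
	c->drops++;
	/* Net change: the new segment arrived, the old ack left. */
	c->buffer_used += (long)seg_truesize - (long)ack_truesize;
}
```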