Message-ID: <CAM_iQpXjTO7T_i-9tPw_xtwc3G91GDVHF_xc=J3xN+2dU+-F_Q@mail.gmail.com>
Date: Thu, 9 Jul 2020 23:21:08 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/2] net: sched: Lockless Token Bucket (LTB) Qdisc
On Thu, Jul 9, 2020 at 11:07 PM YU, Xiangning
<xiangning.yu@...baba-inc.com> wrote:
>
>
> On 7/9/20 10:20 PM, Cong Wang wrote:
> > On Thu, Jul 9, 2020 at 10:04 PM Cong Wang <xiyou.wangcong@...il.com> wrote:
> >> IOW, without these *additional* efforts, it is broken in terms of
> >> out-of-order delivery.
> >>
> >
> > Take a look at fq_codel, it provides a hash function for flow classification,
> > fq_codel_hash(), as default, thus its default configuration does not
> > have such issues. So, you probably want to provide such a hash
> > function too instead of a default class.
> >
> If I understand this code correctly, this socket hash value identifies
> a flow. Essentially it serves the same purpose as socket priority. In
> this patch, we use a classification method similar to HTB's, but
> without filters.
How is this similar to HTB? HTB does not have a per-cpu queue
for each class. This is a huge difference.
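
To make the difference concrete, here is a rough sketch of the two
layouts (the struct names are hypothetical, not taken from either
implementation):

#include <linux/skbuff.h>

/* HTB-style: one leaf queue per class, serialized by the qdisc
 * root lock, so packets of a flow stay ordered. */
struct htb_style_class {
	struct Qdisc *leaf;
};

/* LTB-style as posted: one independent queue per CPU per class;
 * nothing ties a given flow to one queue. */
struct ltb_style_class {
	struct sk_buff_head __percpu *cpu_queues;
};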
>
> We could provide a hash function, but I'm a bit confused about the
> problem we are trying to solve.
Probably more than that: you need to ensure that packets in the same
flow are queued on the same queue.

Let's say you have two packets P1 and P2 from the same flow (P1 is sent
before P2). You can classify them into the same class, of course, but
with per-cpu queues they can still be sent out in the wrong order:

send(P1) on CPU1 -> classify() returns default class -> P1 is queued on
the CPU1 queue of the default class
(now the process is migrated to CPU2)
send(P2) on CPU2 -> classify() returns default class -> P2 is queued on
the CPU2 queue of the default class
P2 is dequeued on CPU2 before P1 is dequeued on CPU1.
Now, out of order. :)
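
A flow hash in the spirit of fq_codel_hash() avoids this, because the
queue is picked from the packet's flow hash instead of from the current
CPU. A minimal sketch (the struct and its queues_cnt field are made up
for illustration):

#include <linux/kernel.h>	/* reciprocal_scale() */
#include <linux/skbuff.h>	/* skb_get_hash() */

struct my_sched_data {
	u32 queues_cnt;		/* number of per-class queues */
};

/* skb_get_hash() is stable for a given flow, so P1 and P2 above
 * would map to the same queue no matter which CPU calls send(). */
static unsigned int flow_queue_select(const struct my_sched_data *q,
				      struct sk_buff *skb)
{
	return reciprocal_scale(skb_get_hash(skb), q->queues_cnt);
}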
Hope it is clear now.
Thanks.