Message-ID: <9c656cda-d197-b4c1-8700-553a138d54c7@digirati.com.br>
Date: Thu, 10 May 2018 15:06:37 -0400
From: Michel Machado <michel@...irati.com.br>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: Nishanth Devarajan <ndev2021@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Jamal Hadi Salim <jhs@...atatu.com>,
David Miller <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Cody Doucette <doucette@...edu>
Subject: Re: [PATCH net-next] net:sched: add gkprio scheduler
On 05/10/2018 01:38 PM, Cong Wang wrote:
> On Wed, May 9, 2018 at 7:09 AM, Michel Machado <michel@...irati.com.br> wrote:
>> On 05/08/2018 10:24 PM, Cong Wang wrote:
>>>
>>> On Tue, May 8, 2018 at 5:59 AM, Michel Machado <michel@...irati.com.br>
>>> wrote:
>>>>>>
>>>>>> Overall it looks good to me, just one thing below:
>>>>>>
>>>>>>> +struct Qdisc_ops gkprio_qdisc_ops __read_mostly = {
>>>>>>> + .id = "gkprio",
>>>>>>> + .priv_size = sizeof(struct gkprio_sched_data),
>>>>>>> + .enqueue = gkprio_enqueue,
>>>>>>> + .dequeue = gkprio_dequeue,
>>>>>>> + .peek = qdisc_peek_dequeued,
>>>>>>> + .init = gkprio_init,
>>>>>>> + .reset = gkprio_reset,
>>>>>>> + .change = gkprio_change,
>>>>>>> + .dump = gkprio_dump,
>>>>>>> + .destroy = gkprio_destroy,
>>>>>>> + .owner = THIS_MODULE,
>>>>>>> +};
>>>>>>
>>>>>>
>>>>>>
>>>>>> You probably want to add Qdisc_class_ops here so that you can
>>>>>> dump the stats of each internal queue.
>>>>
>>>>
>>>>
>>>> Hi Cong,
>>>>
>>>> In the production scenario we are targeting, this priority queue
>>>> must be classless; being classful would only bloat the code for us.
>>>> I don't see making this queue classful as a problem per se, but I
>>>> suggest leaving it as a future improvement for when someone can come
>>>> up with a useful scenario for it.
>>>
>>>
>>>
>>> Take a look at sch_prio; it is fairly simple since your internal
>>> queues are just an array... Per-queue stats are quite useful in
>>> production; we definitely want to observe which queues are full and
>>> which are not.
>>>
>>
>> DSprio cannot add Qdisc_class_ops without a rewrite of other queue
>> disciplines, which doesn't seem desirable. Since the method cops->leaf is
>> required (see register_qdisc()), we would need to replace the array struct
>> sk_buff_head qdiscs[GKPRIO_MAX_PRIORITY] in struct gkprio_sched_data with
>> the array struct Qdisc *queues[GKPRIO_MAX_PRIORITY] to be able to return a
>> Qdisc in dsprio_leaf(). The problem with this change is that Qdisc does not
>> have a method to dequeue from its tail. This new method may not even make
>> sense in other queue disciplines. But without this method, gkprio_enqueue()
>> cannot drop the lowest priority packet when the queue is full and an
>> incoming packet has higher priority.
>
> Sorry for giving you a bad example. Take a look at sch_fq_codel instead;
> it returns NULL for ->leaf() and maps its internal flows to classes.
>
> I thought sch_prio used internal qdiscs, but I was wrong; as you noticed,
> it actually exposes them to the user via classes.
>
> My point was never to make it classful; I just want to expose the useful
> stats, like how fq_codel dumps its internal flows.
>
>
>>
>> Nevertheless, I see your point on being able to observe the distribution of
>> queued packets per priority. A solution for that would be to add the array
>> __u32 qlen[GKPRIO_MAX_PRIORITY] in struct tc_gkprio_qopt. This solution even
>> avoids adding overhead in the critical paths of DSprio. Do you see a better
>> solution?
>
> I believe you can return NULL for ->leaf() and don't need to worry about
> ->graft() either. ;)
Thank you for pointing sch_fq_codel out. We'll follow its example.
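
For concreteness, here is a rough, untested sketch of the direction we
have in mind, modeled on fq_codel's class ops and living next to the
existing ops in sch_gkprio.c. The helper names are placeholders, and it
assumes struct gkprio_sched_data keeps the
sk_buff_head qdiscs[GKPRIO_MAX_PRIORITY] array, with class minor number N
mapping to priority N - 1:

/* Sketch only, untested: assumes struct gkprio_sched_data keeps the
 * sk_buff_head qdiscs[GKPRIO_MAX_PRIORITY] array and that class minor
 * number N maps to priority N - 1.
 */
static struct Qdisc *gkprio_leaf(struct Qdisc *sch, unsigned long arg)
{
	/* No attachable child qdiscs, as in fq_codel. */
	return NULL;
}

static unsigned long gkprio_find(struct Qdisc *sch, u32 classid)
{
	unsigned long band = TC_H_MIN(classid);

	return band <= GKPRIO_MAX_PRIORITY ? band : 0;
}

static int gkprio_dump_class(struct Qdisc *sch, unsigned long cl,
			     struct sk_buff *skb, struct tcmsg *tcm)
{
	tcm->tcm_handle |= TC_H_MIN(cl);
	return 0;
}

static int gkprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
				   struct gnet_dump *d)
{
	struct gkprio_sched_data *q = qdisc_priv(sch);
	struct gnet_stats_queue qs = { 0 };

	/* Report the length of the per-priority queue behind this class. */
	qs.qlen = skb_queue_len(&q->qdiscs[cl - 1]);
	return gnet_stats_copy_queue(d, NULL, &qs, qs.qlen);
}

static void gkprio_walk(struct Qdisc *sch, struct qdisc_walker *arg)
{
	unsigned int i;

	if (arg->stop)
		return;

	for (i = 0; i < GKPRIO_MAX_PRIORITY; i++) {
		if (arg->count < arg->skip) {
			arg->count++;
			continue;
		}
		if (arg->fn(sch, i + 1, arg) < 0) {
			arg->stop = 1;
			break;
		}
		arg->count++;
	}
}

static const struct Qdisc_class_ops gkprio_class_ops = {
	.leaf = gkprio_leaf,
	.find = gkprio_find,
	.dump = gkprio_dump_class,
	.dump_stats = gkprio_dump_class_stats,
	.walk = gkprio_walk,
};

With .cl_ops = &gkprio_class_ops added to gkprio_qdisc_ops, "tc -s class
show" should report the qlen of each priority band without making the
qdisc classful or touching the enqueue/dequeue paths.
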
[ ]'s
Michel Machado