Message-ID: <CAM_iQpX5iwRN5C7oQ3X9F0viG8Ya15C0gcYFR_CJDxqhXCPYqQ@mail.gmail.com>
Date: Thu, 10 May 2018 10:38:57 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: Michel Machado <michel@...irati.com.br>
Cc: Nishanth Devarajan <ndev2021@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Jamal Hadi Salim <jhs@...atatu.com>,
David Miller <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Cody Doucette <doucette@...edu>
Subject: Re: [PATCH net-next] net:sched: add gkprio scheduler
On Wed, May 9, 2018 at 7:09 AM, Michel Machado <michel@...irati.com.br> wrote:
> On 05/08/2018 10:24 PM, Cong Wang wrote:
>>
>> On Tue, May 8, 2018 at 5:59 AM, Michel Machado <michel@...irati.com.br>
>> wrote:
>>>>>
>>>>> Overall it looks good to me, just one thing below:
>>>>>
>>>>>> +struct Qdisc_ops gkprio_qdisc_ops __read_mostly = {
>>>>>> + .id = "gkprio",
>>>>>> + .priv_size = sizeof(struct gkprio_sched_data),
>>>>>> + .enqueue = gkprio_enqueue,
>>>>>> + .dequeue = gkprio_dequeue,
>>>>>> + .peek = qdisc_peek_dequeued,
>>>>>> + .init = gkprio_init,
>>>>>> + .reset = gkprio_reset,
>>>>>> + .change = gkprio_change,
>>>>>> + .dump = gkprio_dump,
>>>>>> + .destroy = gkprio_destroy,
>>>>>> + .owner = THIS_MODULE,
>>>>>> +};
>>>>>
>>>>> You probably want to add Qdisc_class_ops here so that you can
>>>>> dump the stats of each internal queue.
>>>
>>> Hi Cong,
>>>
>>>    In the production scenario we are targeting, this priority queue
>>> must be classless; being classful would only bloat the code for us.
>>> I don't see making this queue classful as a problem per se, but I
>>> suggest leaving it as a future improvement for when someone can come
>>> up with a useful scenario for it.
>>
>> Take a look at sch_prio; it is fairly simple, since your internal
>> queues are just an array... Per-queue stats are quite useful in
>> production; we definitely want to observe which queues are full and
>> which are not.
>>
>
> DSprio cannot add Qdisc_class_ops without changes that reach beyond
> this queue discipline, which doesn't seem desirable. Since the method
> cops->leaf is required (see register_qdisc()), we would need to replace
> the array struct sk_buff_head qdiscs[GKPRIO_MAX_PRIORITY] in struct
> gkprio_sched_data with the array struct Qdisc *queues[GKPRIO_MAX_PRIORITY]
> to be able to return a Qdisc from dsprio_leaf(). The problem with this
> change is that struct Qdisc has no method to dequeue from its tail, and
> such a method may not even make sense for other queue disciplines. But
> without it, gkprio_enqueue() cannot drop the lowest-priority packet when
> the queue is full and an incoming packet has higher priority, as the
> sketch below illustrates.
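>
> A sketch of that drop step (the lowest_prio bookkeeping and the limit
> field are placeholders for whatever v1 actually names them; prio is
> the priority already computed from skb->priority):
>
>     if (sch->q.qlen >= sch->limit && prio > q->lowest_prio) {
>             /* Drop from the tail of the lowest non-empty priority.
>              * This works only because the internal queues are
>              * sk_buff_heads; struct Qdisc has no tail dequeue.
>              */
>             struct sk_buff_head *lp = &q->qdiscs[q->lowest_prio];
>             struct sk_buff *to_drop = __skb_dequeue_tail(lp);
>
>             qdisc_qstats_backlog_dec(sch, to_drop);
>             qdisc_drop(to_drop, sch, to_free);
>
>             /* There is now room for the higher-priority packet. */
>             __skb_queue_tail(&q->qdiscs[prio], skb);
>             qdisc_qstats_backlog_inc(sch, skb);
>             /* highest_prio/lowest_prio updates elided */
>     }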
Sorry for giving you a bad example. Take a look at sch_fq_codel instead:
it returns NULL for ->leaf() and maps its internal flows to classes.
I thought sch_prio used internal qdiscs, but I was wrong; as you noticed,
it actually exposes them to the user via classes.

My point was never to make it classful; I just want to expose the useful
stats, like how fq_codel dumps its internal flows.
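
Something along these lines would do it. This is just a sketch modeled
on sch_fq_codel.c; the gkprio_* names are placeholders, and the
per-priority qlen assumes your v1 sk_buff_head array:

    static struct Qdisc *gkprio_leaf(struct Qdisc *sch, unsigned long arg)
    {
            return NULL;    /* no real child qdiscs, like fq_codel */
    }

    static unsigned long gkprio_find(struct Qdisc *sch, u32 classid)
    {
            return 0;
    }

    static int gkprio_dump_class(struct Qdisc *sch, unsigned long cl,
                                 struct sk_buff *skb, struct tcmsg *tcm)
    {
            tcm->tcm_handle |= TC_H_MIN(cl);
            return 0;
    }

    static int gkprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
                                       struct gnet_dump *d)
    {
            struct gkprio_sched_data *q = qdisc_priv(sch);
            struct gnet_stats_queue qs = { 0 };

            /* Class cl maps to priority cl - 1. */
            qs.qlen = q->qdiscs[cl - 1].qlen;
            return gnet_stats_copy_queue(d, NULL, &qs, qs.qlen);
    }

    static void gkprio_walk(struct Qdisc *sch, struct qdisc_walker *arg)
    {
            unsigned int i;

            if (arg->stop)
                    return;

            for (i = 0; i < GKPRIO_MAX_PRIORITY; i++) {
                    if (arg->count < arg->skip) {
                            arg->count++;
                            continue;
                    }
                    if (arg->fn(sch, i + 1, arg) < 0) {
                            arg->stop = 1;
                            break;
                    }
                    arg->count++;
            }
    }

    static const struct Qdisc_class_ops gkprio_class_ops = {
            .leaf           = gkprio_leaf,
            .find           = gkprio_find,
            .walk           = gkprio_walk,
            .dump           = gkprio_dump_class,
            .dump_stats     = gkprio_dump_class_stats,
    };

Hook it up with .cl_ops = &gkprio_class_ops in gkprio_qdisc_ops, and
"tc -s class show dev <dev>" will report per-priority queue lengths
without making the qdisc classful.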
>
> Nevertheless, I see your point about being able to observe the
> distribution of queued packets across priorities. One solution would be
> to add the array __u32 qlen[GKPRIO_MAX_PRIORITY] to struct
> tc_gkprio_qopt, along the lines of the sketch below. This solution even
> avoids adding overhead to the critical paths of DSprio. Do you see a
> better solution?
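>
> Sketch of the extended struct (only the qlen[] array is the actual
> proposal; the limit field merely stands in for v1's existing fields):
>
>     struct tc_gkprio_qopt {
>             __u32 limit;                     /* queue length limit */
>             __u32 qlen[GKPRIO_MAX_PRIORITY]; /* per-priority occupancy */
>     };
>
> gkprio_dump() would fill qlen[] from the sk_buff_head array just before
> copying the struct to user space with nla_put().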
I believe you can return NULL for ->leaf(), and you don't need to worry
about ->graft() either. ;)
>
> By the way, I've used GKPRIO_MAX_PRIORITY and other names that include
> "gkprio" above to reflect version 1 of this patch, which is what we are
> discussing. In version 2, we will rename these identifiers, replacing
> "gkprio" with "dsprio".
>
Sounds good.
Thanks.