Message-ID: <d450d9da-a1d2-4aae-1ed7-edc2d0972119@digirati.com.br>
Date:   Wed, 9 May 2018 10:09:40 -0400
From:   Michel Machado <michel@...irati.com.br>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Nishanth Devarajan <ndev2021@...il.com>,
        Jiri Pirko <jiri@...nulli.us>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        David Miller <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Cody Doucette <doucette@...edu>
Subject: Re: [PATCH net-next] net:sched: add gkprio scheduler

On 05/08/2018 10:24 PM, Cong Wang wrote:
> On Tue, May 8, 2018 at 5:59 AM, Michel Machado <michel@...irati.com.br> wrote:
>>>> Overall it looks good to me, just one thing below:
>>>>
>>>>> +struct Qdisc_ops gkprio_qdisc_ops __read_mostly = {
>>>>> +       .id             =       "gkprio",
>>>>> +       .priv_size      =       sizeof(struct gkprio_sched_data),
>>>>> +       .enqueue        =       gkprio_enqueue,
>>>>> +       .dequeue        =       gkprio_dequeue,
>>>>> +       .peek           =       qdisc_peek_dequeued,
>>>>> +       .init           =       gkprio_init,
>>>>> +       .reset          =       gkprio_reset,
>>>>> +       .change         =       gkprio_change,
>>>>> +       .dump           =       gkprio_dump,
>>>>> +       .destroy        =       gkprio_destroy,
>>>>> +       .owner          =       THIS_MODULE,
>>>>> +};
>>>>
>>>>
>>>> You probably want to add Qdisc_class_ops here so that you can
>>>> dump the stats of each internal queue.
>>
>>
>> Hi Cong,
>>
>>     In the production scenario we are targeting, this priority queue must be
>> classless; being classful would only bloat the code for us. I don't see
>> making this queue classful as a problem per se, but I suggest leaving it as
>> a future improvement for when someone can come up with a useful scenario for
>> it.
> 
> 
> Take a look at sch_prio, it is fairly simple since your internal
> queues are just an array... Per-queue stats are quite useful
> in production: we definitely want to observe which queues are
> full and which are not.
> 

DSprio cannot add Qdisc_class_ops without changes that would ripple 
into other queue disciplines, which doesn't seem desirable. Since the 
method cops->leaf is required (see register_qdisc()), we would need to 
replace the array struct sk_buff_head qdiscs[GKPRIO_MAX_PRIORITY] in 
struct gkprio_sched_data with the array struct Qdisc 
*queues[GKPRIO_MAX_PRIORITY] in order to return a Qdisc from 
dsprio_leaf(). The problem with this change is that struct Qdisc has 
no method to dequeue from its tail, and such a method may not even 
make sense for other queue disciplines. Without it, though, 
gkprio_enqueue() cannot drop the lowest-priority packet when the queue 
is full and an incoming packet has higher priority.
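
To make the constraint concrete, here is a minimal sketch of that 
overflow path (not the v1 code; the priority extraction and the 
lowest_prio bookkeeping below are assumptions for illustration):

static int gkprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			  struct sk_buff **to_free)
{
	struct gkprio_sched_data *q = qdisc_priv(sch);
	struct sk_buff *victim;
	u16 prio;

	/* Hypothetical stand-in for the patch's priority extraction. */
	prio = min_t(u32, skb->priority, GKPRIO_MAX_PRIORITY - 1);

	if (sch->q.qlen >= sch->limit) {
		/* Queue full and the newcomer is no better: drop it. */
		if (prio <= q->lowest_prio)
			return qdisc_drop(skb, sch, to_free);

		/* Make room by removing a packet from the *tail* of
		 * the lowest-priority band. sk_buff_head supports this
		 * via __skb_dequeue_tail(); a child struct Qdisc only
		 * dequeues from its head, which is the obstacle above. */
		victim = __skb_dequeue_tail(&q->qdiscs[q->lowest_prio]);
		if (victim) {
			sch->q.qlen--;
			qdisc_drop(victim, sch, to_free);
		}
	}

	__skb_queue_tail(&q->qdiscs[prio], skb);
	sch->q.qlen++;
	/* Updating q->lowest_prio/q->highest_prio omitted for brevity. */
	return NET_XMIT_SUCCESS;
}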

Nevertheless, I see your point about being able to observe the 
distribution of queued packets per priority. A solution for that would 
be to add the array __u32 qlen[GKPRIO_MAX_PRIORITY] to struct 
tc_gkprio_qopt. This solution even avoids adding overhead to the 
critical paths of DSprio. Do you see a better solution?
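
A rough sketch of how gkprio_dump() could export those counters, 
assuming struct tc_gkprio_qopt gains the qlen[] array and keeps a 
limit field (both are assumptions here, not v1 code):

static int gkprio_dump(struct Qdisc *sch, struct sk_buff *skb)
{
	struct gkprio_sched_data *q = qdisc_priv(sch);
	struct tc_gkprio_qopt opt;
	int prio;

	memset(&opt, 0, sizeof(opt));
	opt.limit = sch->limit;	/* assumed existing field */

	/* Read the per-band occupancy only at dump time, so the
	 * enqueue/dequeue fast paths stay untouched. */
	for (prio = 0; prio < GKPRIO_MAX_PRIORITY; prio++)
		opt.qlen[prio] = q->qdiscs[prio].qlen;

	if (nla_put(skb, TCA_OPTIONS, sizeof(opt), &opt))
		return -1;
	return skb->len;
}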

By the way, I've used GKPRIO_MAX_PRIORITY and other names that include 
"gkprio" above to reflect version 1 of this patch, which we are 
discussing. In version 2 we will rename these identifiers, replacing 
"gkprio" with "dsprio".

[ ]'s
Michel Machado
