Date:   Mon, 14 May 2018 10:08:28 -0400
From:   Michel Machado <michel@...irati.com.br>
To:     Jamal Hadi Salim <jhs@...atatu.com>,
        Cong Wang <xiyou.wangcong@...il.com>
Cc:     Nishanth Devarajan <ndev2021@...il.com>,
        Jiri Pirko <jiri@...nulli.us>,
        David Miller <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Cody Doucette <doucette@...edu>
Subject: Re: [PATCH net-next] net:sched: add gkprio scheduler

> On 09/05/18 01:37 PM, Michel Machado wrote:
>> On 05/09/2018 10:43 AM, Jamal Hadi Salim wrote:
>>> On 08/05/18 10:27 PM, Cong Wang wrote:
>>>> On Tue, May 8, 2018 at 6:29 AM, Jamal Hadi Salim <jhs@...atatu.com> 
>>>> wrote:
> 
>>
>> I like the suggestion of extending skbmod to mark skbprio based on ds. 
>> Given that DSprio would no longer depend on the DS field, would you 
>> have a name suggestion for this new queue discipline since the name 
>> "prio" is currently in use?
>>
> 
> Not sure what to call it.
> My struggle is still with the intended end goal of the qdisc.
> It looks like prio qdisc except for the enqueue part which attempts
> to use a shared global queue size for all prios. I would have
> pointed to other approaches which use global priority queue pool
> which do early congestion detection like RED or variants like GRED but
> those use average values of the queue lengths, not instantaneous
> values as you do.
> I am tempted to say - based on my current understanding - that you don't
> need a new qdisc; rather you need to map your dsfields to skbprio
> (via skbmod) and stick with prio qdisc. I also think the skbmod
> mapping is useful regardless of this need.

A simplified description of what DSprio is meant to do is as follows: 
when a link is overloaded at a router, DSprio makes that router drop the 
lower-priority packets. These priorities are assigned by Gatekeeper 
in such a way that well-behaved sources are favored (Theorem 4.1 of the 
Portcullis paper pointed out in my previous email). Moreover, attackers 
cannot do much better than well-behaved sources (Theorem 4.2). This 
description is simplified because it omits many other components of 
Gatekeeper that affect the packets that go to DSprio.
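
To make the drop policy concrete, here is a toy model of the behavior 
described above: one shared queue limit across all priorities, and when 
the queue is full, a lowest-priority queued packet is dropped to admit a 
higher-priority arrival. All names, the limit, and the structure are 
illustrative only, not the actual patch:

```c
#include <stddef.h>

/* Toy model: 6-bit DS field gives 64 priority levels; a larger
 * numeric value means a higher priority.  The queue limit is shared
 * across all priorities (instantaneous, not averaged). */
#define DSPRIO_PRIOS 64
#define QUEUE_LIMIT  4   /* deliberately tiny for illustration */

struct toy_dsprio {
    unsigned int qlen[DSPRIO_PRIOS]; /* packets queued per priority */
    unsigned int total;              /* shared queue occupancy */
};

/* Returns 1 if the arriving packet is admitted, 0 if it is dropped. */
static int toy_enqueue(struct toy_dsprio *q, unsigned int prio)
{
    unsigned int lowest = 0;

    if (q->total < QUEUE_LIMIT) {
        q->qlen[prio]++;
        q->total++;
        return 1;
    }
    /* Queue full: locate the lowest priority with queued packets. */
    while (lowest < DSPRIO_PRIOS && q->qlen[lowest] == 0)
        lowest++;
    if (lowest >= prio)
        return 0;        /* arrival does not outrank anything queued */
    q->qlen[lowest]--;   /* drop one lowest-priority packet ...      */
    q->qlen[prio]++;     /* ... and admit the arrival; total is unchanged */
    return 1;
}
```
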

Like you, I'm all in for less code. If someone can instruct us on how to 
accomplish the same thing that our patch is doing, we would be happy to 
withdraw it. We have submitted this patch because we want to lower the 
bar to deploy Gatekeeper as much as possible, and requiring network 
operators willing to deploy Gatekeeper to keep patching the kernel is an 
operational burden.

>> What should be the range of priorities that this new queue discipline 
>> would accept? skb->priority is of type __u32, but supporting 2^32 
>> priorities would require too large of an array to index packets by 
>> priority; the DS field is only 6 bits long. Do you have a use case in 
>> mind to guide us here?
>>
> 
> Look at the priomap or prio2band arrangement on prio qdisc
> or pfifo_fast qdisc. You take an skbprio as an index into the array
> and retrieve a queue to enqueue to. The size of the array is 16.
> In the past this was based IIRC on IP precedence + 1 bit. Those map
> similarly to DS fields (class selectors, assured forwarding, etc). So
> no need to even increase the array beyond current 16.

What application is this change supposed to enable or help? I think this 
change should be left for when one can explain the need for it.
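
For context, the priomap lookup Jamal describes works roughly as 
follows: skb->priority is masked into a 16-entry table that selects a 
band. The table below is the default prio2band of pfifo_fast; the 
helper name is illustrative:

```c
/* Default pfifo_fast mapping: skb->priority (low 4 bits) -> band.
 * Band 0 is serviced first, then 1, then 2. */
static const unsigned char prio2band[16] = {
    1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
};

static unsigned int band_for_priority(unsigned int skb_priority)
{
    return prio2band[skb_priority & 0xf]; /* index by the low 4 bits */
}
```
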

>>> 2) Dropping already enqueued packets will not work well for
>>> local feedback (the __NET_XMIT_BYPASS return code is about a
>>> packet that was dropped from an earlier enqueue because it is
>>> lower priority - it does not signify anything about the current
>>> skb, which actually just got enqueued).
>>> Perhaps (off the top of my head) one could always enqueue packets
>>> at high priority when their limit is exceeded, as long as lower
>>> prio has some space. Means you'd have to increment low-prio
>>> accounting if their space is used.
>>
>> I don't understand the point you are making here. Could you develop it 
>> further?
>>
> 
> Sorry - I meant NET_XMIT_CN.
> If you drop an already enqueued packet, it makes sense to signal
> that using NET_XMIT_CN. This does not make sense for forwarded
> packets, but it does for locally sourced packets.

Thank you for bringing this detail to our attention; we had overlooked 
the return code NET_XMIT_CN. We'll adopt it when the queue is full and 
the lowest-priority packet in the queue is being dropped to make room 
for the incoming, higher-priority packet.
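
The return-code convention we intend to follow can be sketched as below. 
The NET_XMIT_* values match include/linux/netdevice.h; the helper and 
its parameters are illustrative only:

```c
/* Values as defined in include/linux/netdevice.h. */
#define NET_XMIT_SUCCESS 0x00
#define NET_XMIT_DROP    0x01
#define NET_XMIT_CN      0x02

/* Choose the enqueue return code:
 * - room available: the packet is queued normally;
 * - queue full but the arrival outranks the lowest queued packet:
 *   an already-enqueued packet is dropped, so signal congestion
 *   (NET_XMIT_CN) rather than implying this skb was lost;
 * - queue full and the arrival does not outrank anything: the
 *   arriving packet itself is dropped (NET_XMIT_DROP). */
static int enqueue_result(int queue_full, int arrival_outranks_lowest)
{
    if (!queue_full)
        return NET_XMIT_SUCCESS;
    if (arrival_outranks_lowest)
        return NET_XMIT_CN;
    return NET_XMIT_DROP;
}
```
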

[ ]'s
Michel Machado
