Date:   Wed, 16 May 2018 08:18:27 -0400
From:   Jamal Hadi Salim <jhs@...atatu.com>
To:     Michel Machado <michel@...irati.com.br>,
        Cong Wang <xiyou.wangcong@...il.com>
Cc:     Nishanth Devarajan <ndev2021@...il.com>,
        Jiri Pirko <jiri@...nulli.us>,
        David Miller <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Cody Doucette <doucette@...edu>
Subject: Re: [PATCH net-next] net:sched: add gkprio scheduler

Sorry I dropped this.

On 14/05/18 10:08 AM, Michel Machado wrote:
>> On 09/05/18 01:37 PM, Michel Machado wrote:

> 
> A simplified description of what DSprio is meant to do is as follows: 
> when a link is overloaded at a router, DSprio makes this router drop the 
> packets of lower priority.

Makes sense. Any priority-based work-conserving scheduler will work
fine. The only small difference you have with the prio qdisc is that you
drop an enqueued low-priority packet to make room for a new higher-priority
one. Can you look at the pfifo_head_drop qdisc to see if it suffices? It
may not: in that case, I would suggest a hybrid between
pfifo_head_drop and pfifo_fast for the new qdisc.
[Cong has suggested writing a classful qdisc, but it may be sufficient
to just replicate what pfifo_fast does since it tracks virtual queues]
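
A quick way to see how far head-drop alone gets you is to install it as
the root qdisc and watch the drop counters (device name and limit are
only examples, untested):

  tc qdisc add dev eth0 root handle 1: pfifo_head_drop limit 1000
  tc -s qdisc show dev eth0

Note this drops the oldest packet of the one queue when it is full; it
will not evict a lower-priority packet to admit a higher-priority one,
which is where the hybrid above would come in.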

> These priorities are assigned by Gatekeeper
> in such a way that well-behaved sources are favored (Theorem 4.1 of the
> Portcullis paper pointed out in my previous email). Moreover, attackers
> cannot do much better than well-behaved sources (Theorem 4.2). This
> description is simplified because it omits many other components of
> Gatekeeper that affect the packets that go to DSprio.
> 

I am sorry - I have no access to this document, so I don't know what these
theorems are. I understand your requirements to be: 1) you are looking to
use priority identifiers to select queues; 2) you want to prioritize
treatment of favorably tagged packets, where enqueueing will drop
lower-priority packets to make space for higher-priority ones under
congestion. Did I miss anything?
For #1 my suggestion is to use skbmod to set the priority tag.
For #2, if you didn't have to drop at enqueue time, you could have
used any of the existing priority-favoring qdiscs which recognize
skb->priority. Otherwise, as I suggested above, look at
pfifo_fast/pfifo_head_drop.

> Like you, I'm all in for less code. If someone can instruct us on how to 
> accomplish the same thing that our patch is doing, we would be happy to 
> withdraw it. We have submitted this patch because we want to lower the 
> bar to deploy Gatekeeper as much as possible, and requiring network 
> operators willing to deploy Gatekeeper to keep patching the kernel is an 
> operational burden.
> 

So I would suggest you keep this really simple - especially if you want
to support older kernels. For existing kernels you can implement the
basic policies of what you need by using the prio qdisc with a combination
of a classifier that knows how to match on the dsfield (trivial to do with
u32) and the skbedit action to tag skb->priority. Then let the prio qdisc
use the priomap to select the queue (rough sketch below).
If you must drop already-enqueued low-priority packets then you may need
the new qdisc. And to optimize, you will need the skbmod change.
I really think it is a bad idea to encapsulate the classifier in the
qdisc.
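
Roughly something like the following, assuming a prio qdisc is already
installed as root with handle 1: (the dsfield value/mask, band and
priority number are made up, and I have not tested this):

  tc filter add dev eth0 parent 1: protocol ip prio 1 \
      u32 match ip dsfield 0xb8 0xfc \
      flowid 1:1 \
      action skbedit priority 1

Matched packets are steered into band 1:1 and also get skb->priority
tagged for anything further down the path; everything else falls back
to the priomap on whatever skb->priority it already carries.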


>> Look at the priomap or prio2band arrangement on prio qdisc
>> or pfifo_fast qdisc. You take an skbprio as an index into the array
>> and retrieve a queue to enqueue to. The size of the array is 16.
>> In the past this was based IIRC on ip precedence + 1 bit. Those map
>> similarly to DS fields (class selectors, assured forwarding, etc). So
>> no need to even increase the array beyond current 16.
> 
> What application is this change supposed to enable or help? I think this 
> change should be left for when one can explain the need for it.
> 

I meant that you should take a look at the priomap. It is an array of
size 16 used by the implicit skb->priority classifier (in prio,
pfifo_fast, etc.). A packet's skb->priority is used as an index into this
array, and from the result a queue is selected to put the packet onto.
For the prio qdisc the map can be configured from user space. I was saying
earlier that it may be tempting to make it a size-64 array to map all the
possible dsfields - in practice that has never been needed (so 16 was
sufficient).
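
For the prio qdisc that looks like this (band layout is just an
illustration; IIRC it mirrors the usual default map):

  tc qdisc add dev eth0 root handle 1: prio bands 3 \
      priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

The 16 values map skb->priority 0..15 to bands 1:1, 1:2 and 1:3
(0 being the highest-priority band), so any packet whose skb->priority
is already set ends up in the corresponding band without needing a
filter on this qdisc.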


cheers,
jamal
