Message-ID: <e34c7581-b5a0-58c4-dde0-cf50497417f8@gmail.com>
Date:   Tue, 2 Jan 2018 19:07:36 -0700
From:   David Ahern <dsahern@...il.com>
To:     Jiri Pirko <jiri@...nulli.us>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, jhs@...atatu.com,
        xiyou.wangcong@...il.com, mlxsw@...lanox.com, andrew@...n.ch,
        vivien.didelot@...oirfairelinux.com, f.fainelli@...il.com,
        michael.chan@...adcom.com, ganeshgr@...lsio.com,
        saeedm@...lanox.com, matanb@...lanox.com, leonro@...lanox.com,
        idosch@...lanox.com, jakub.kicinski@...ronome.com,
        simon.horman@...ronome.com, pieter.jansenvanvuuren@...ronome.com,
        john.hurley@...ronome.com, alexander.h.duyck@...el.com,
        ogerlitz@...lanox.com, john.fastabend@...il.com,
        daniel@...earbox.net
Subject: Re: [patch net-next v4 00/10] net: sched: allow qdiscs to share
 filter block instances

On 1/2/18 12:49 PM, Jiri Pirko wrote:
> DaveA, please consider the following example:
> 
> $ tc qdisc add dev ens7 ingress
> $ tc qdisc
> qdisc ingress ffff: dev ens7 parent ffff:fff1 block 1
> 
> Now I have one device with one qdisc attached.
> 
> I will add some filters, for example:
> $ tc filter add dev ens7 ingress protocol ip pref 25 flower dst_ip 192.168.0.0/16 action drop
> 
> No sharing is happening. The user is doing what he is used to doing.
> 
> Now the user decides to share these filters with another device. As you can
> see above, the block created for ens7 qdisc instance has id "1".
> User can simply do:
> 
> $ tc qdisc add dev ens8 ingress block 1
> 
> And the block gets shared among ens7 ingress qdisc instance and ens8
> ingress qdisc instance.
> 
> What is wrong with this? The approach you suggest would disallow this

Conceptually, absolutely nothing. We all agree that a shared block
feature is needed. So no argument on sharing the filters across devices.

The disagreement is in how they should be managed. I think my last
response concisely captures my concerns -- the principle of least surprise.

So with the initial commands above, all is fine. Then someone is
debugging a problem or wants to add another filter to ens8, so they run:

$ tc filter add dev ens8 ingress protocol ip pref 25 flower dst_ip 192.168.1.0/16 action drop

Then traffic flowing through ens7 breaks, and some other user is left
struggling to understand what just happened. That the new filter magically appears
on ens7 when the user operated on ens8 is a surprise. Nothing about that
last command acknowledges that it is changing a shared resource.
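
To make the surprise concrete: a dump on ens7 (hypothetical output,
abbreviated, format approximate) would now show a filter that nobody
ever added to that device:

$ tc filter show dev ens7 ingress
filter protocol ip pref 25 flower
  dst_ip 192.168.0.0/16
  action drop
filter protocol ip pref 25 flower
  dst_ip 192.168.1.0/16
  action drop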

Consider the commands being run by different people, with time passing in
between. Allowing the shared block to be configured by any device using
the block is just setting up users for errors and confusion.

> forcing the user to explicitly create some block entity and then to attach
> it to qdisc instances. I don't really see a good reason for it. Could you
> please clear this up for me?

It forces the user to acknowledge that they are changing a resource that
may be shared by more than one device.

$ tc filter add dev ens8 ingress protocol ip pref 25 flower dst_ip 192.168.1.0/16 action drop
Error: This qdisc is a shared block. Use the block API to configure.

$ tc qdisc show dev ens8
qdisc ingress ffff: dev ens8 parent ffff:fff1 block 1

$ tc filter add block 1 protocol ip pref 25 flower dst_ip 192.168.1.0/16 action drop

Now there are no surprises. I have to know that ens8 is using block 1,
and I have to specify that block when adding a filter.
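
For inspection, assuming the block API grows a matching dump command, the
same handle would let me audit the shared filters without going through
any device:

$ tc filter show block 1
filter protocol ip pref 25 flower
  dst_ip 192.168.1.0/16
  action drop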


BTW, is there an option to list all devices using the same shared block,
short of dumping all qdiscs and grepping?
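
The obvious workaround, assuming the block id keeps appearing in the qdisc
dump as above, is something like:

$ tc qdisc show | grep -w 'block 1'

but that is exactly the kind of grepping I would rather avoid.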
