Date:   Tue, 26 Dec 2017 15:50:43 -0800
From:   Tom Herbert <tom@...ntonium.net>
To:     David Miller <davem@...emloft.net>
Cc:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Roopa Prabhu <roopa@...ulusnetworks.com>,
        Rohit LastName <rohit@...ntonium.net>
Subject: Re: [PATCH v5 net-next 0/7] net: ILA notification mechanism and fixes

On Tue, Dec 26, 2017 at 2:29 PM, David Miller <davem@...emloft.net> wrote:
> From: Tom Herbert <tom@...ntonium.net>
> Date: Thu, 21 Dec 2017 11:33:25 -0800
>
>> This patch set adds support to get netlink notifications for ILA
>> routes when a route is used.
>>
>> This patch set contains:
>>
>> - General infrastructure for route notifications
>> - The ILA route notification mechanism
>> - Add net to ila build_state
>> - Add flush command to ila_xlat
>> - Fix use of rhashtable for latest fixes
>>
>> Route notifications will be used in conjunction with populating
>> ILA forwarding caches.
>
> Tom, this is just a wolf in sheep's clothing.
>
Dave,

> It's still a cache controllable by external entities.
>
Yep, that's the nature of the problem. In networks of even modest
scale we anticipate that the number of virtual addresses (identifiers)
will far exceed the number of physical hosts. The mapping of virtual
to physical address is not aggregable, so at full scale we expect tens
of billions of these discrete mappings in a single network. No single
device will be able to hold all of these mappings, so they'll be
sharded amongst some number of routers. This works fine for
connectivity, except that it would be nice to eliminate the triangular
routing by having the source perform the encapsulation for the
destination itself. That is the motivation for a working set cache. It
is an optimization, but in networks like 3GPP, it's a big win to
eliminate anchor points in mobility.
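
To make that concrete, here is a rough sketch of what one cached
mapping amounts to -- field names and layout are illustrative, not the
kernel's ILA structs: the 64-bit identifier from the low half of the
SIR address keyed to the 64-bit locator that replaces the high half,
plus bookkeeping to age entries out of the working set.

#include <stdint.h>

/* Illustrative only -- a minimal identifier->locator cache entry.
 * A real entry would also carry hash-table linkage, checksum-neutral
 * mapping state, stats, etc., putting it closer to the 64 bytes
 * assumed in the estimate further down.
 */
struct ila_map_entry {
	uint64_t identifier;	/* low 64 bits of the SIR address */
	uint64_t locator;	/* written into the high 64 bits on xmit */
	uint64_t last_used;	/* timestamp for working-set aging */
};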

> It still therefore has the DoS'ability aspects.
>
True, if implemented without consideration of DoS this is a very bad
thing, as others have already proven. However, if we know this going
in, then DoS'ability can be mitigated or eliminated, depending on the
rest of the implementation and architecture, similar to how SYN
attacks can be dealt with.
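
Just to illustrate the SYN-flood analogy -- this is not what the patch
set implements, and the names and numbers below are made up: because
the cache is purely an optimization, a miss can always fall back to
forwarding through the ILA router, so admission of new entries can be
bounded instead of growing state on demand from untrusted senders.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical knobs -- not from the patch set. */
#define ILA_CACHE_MAX_ENTRIES	(1u << 20)

static size_t cache_entries;	/* maintained by insert/evict paths */
static unsigned int admit_tokens;	/* refilled once per second elsewhere */

/* Decide whether a cache miss may create a new entry.  Refusing is
 * always safe: the packet is simply forwarded on the triangular path
 * through the ILA router instead of adding local state, much as SYN
 * cookies avoid keeping state for unverified peers.
 */
static bool ila_cache_may_admit(void)
{
	if (cache_entries >= ILA_CACHE_MAX_ENTRIES)
		return false;
	if (admit_tokens == 0)
		return false;
	admit_tokens--;
	return true;
}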

For example, suppose a device has a 10G input link, we want a cache
entry to be usable for at least 30 seconds, and we have no control
over the users on the other side of the link (a typical eNodeB
scenario). That gives a worst case of about 19M pps, or 585M packets
over 30 seconds. Assuming 64 bytes per cache entry, that gets us to
about 37G of memory needed in the host. That amount of memory is
reasonable for a networking device. The cost of memory should drop
over the next few years, so 10X scaling within ten years seems
feasible.
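
(The figures above follow from assuming minimum-size 64-byte packets
at 10G line rate with framing overhead ignored -- that assumption is
mine, but it reproduces the numbers.)

#include <stdio.h>

int main(void)
{
	const double link_bps = 10e9;		/* 10G input link */
	const double min_pkt_bits = 64 * 8;	/* minimum-size packets */
	const double window_sec = 30;		/* desired cache lifetime */
	const double entry_bytes = 64;		/* assumed cache entry size */

	double pps = link_bps / min_pkt_bits;		/* ~19.5M pps */
	double pkts = pps * window_sec;			/* ~586M packets */
	double mem_gb = pkts * entry_bytes / 1e9;	/* ~37.5G */

	printf("%.1fM pps, %.0fM packets, %.1fG of cache memory\n",
	       pps / 1e6, pkts / 1e6, mem_gb);
	return 0;
}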

> You can keep reframing this thing you want out there, either by
> explicitly filling the cache in the kernel or doing it via userspace
> responding to the netlink events, but it's still the same exact thing
> with the same set of problems.
>
I would point out that the attack surface using a redirect mechanism
is _way_ smaller than the request/response model that was used by LISP
or OVS.

> I'm sorry, but I can't apply this series.  Nor any series that adds a
> DoS'able facility of forwarding/switching/route objects to the
> kernel.
>
Technically, this patch set was just adding route notifications that
facilitate, but aren't a requirement for, cache management. However, I
do sympathize with your concerns. Scaling and DoS are precisely the
big problems to overcome in network virtualization and
identifier/locator split.
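
For reference, the shape of the userspace side consuming such
notifications is an ordinary rtnetlink listener along these lines --
illustrative only: it subscribes to the plain IPv6 route group and
doesn't show the ILA-specific attributes this series adds.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
	char buf[8192];
	struct sockaddr_nl sa = {
		.nl_family = AF_NETLINK,
		.nl_groups = RTMGRP_IPV6_ROUTE,
	};
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("netlink");
		return 1;
	}

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nh;

		if (len <= 0)
			break;
		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
		     nh = NLMSG_NEXT(nh, len)) {
			if (nh->nlmsg_type != RTM_NEWROUTE)
				continue;
			/* A real daemon would parse the rtmsg and its
			 * attributes here and update its ILA mapping
			 * cache accordingly. */
			printf("route notification, %u bytes\n",
			       nh->nlmsg_len);
		}
	}
	close(fd);
	return 0;
}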

Happy Holidays!
Tom
