Date:   Fri, 29 May 2020 21:35:18 +0300
From:   Ido Schimmel <idosch@...sch.org>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, jiri@...lanox.com,
        mlxsw@...lanox.com, Ido Schimmel <idosch@...lanox.com>
Subject: Re: [PATCH net-next 00/14] mlxsw: Various trap changes - part 2

On Wed, May 27, 2020 at 12:50:17PM -0700, Jakub Kicinski wrote:
> On Wed, 27 May 2020 10:38:57 +0300 Ido Schimmel wrote:
> > There is no special sauce required to get a DHCP daemon or BFD
> > working. It is supposed to Just Work. Same for IGMP / MLD snooping,
> > STP, etc. This is enabled by the ASIC trapping the required packets
> > to the CPU.
> > 
> > However, having a 3.2/6.4/12.8 Tbps ASIC (it keeps growing all the time)
> > send traffic to the CPU can very easily result in denial of service. You
> > need hardware policers and classification into different traffic classes
> > to ensure the system remains functional regardless of the havoc happening
> > in the offloaded data path.
> 
> I don't see how that's only applicable to a switch ASIC, though.
> Ingress classification and rate limiting apply to any network
> system.

This is not about ingress classification and rate limiting. The
classification does not happen at ingress. It happens at different
points throughout the pipeline, via hard-coded checks meant to identify
packets of interest. These checks look at both state (e.g., a neighbour
miss, a route miss) and packet fields (e.g., a BGP packet that hit a
local route).

Similarly, the rate limiting does not happen at ingress. It only applies
to packets that your offloaded data path decided should go to the
attached host (the control plane). You cannot perform the rate limiting
at ingress for the simple reason that at ingress you do not yet know
whether the packet should reach the control plane.
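
To illustrate, this is roughly how it looks from user space with the
devlink-trap interface (a sketch only; the device name is made up and
the exact trap names depend on the device and kernel version):

  # list the traps exposed by the device and the group each belongs to
  $ devlink trap show pci/0000:01:00.0

  # show per-trap statistics, e.g. how many packets hit a given trap
  $ devlink -s trap show pci/0000:01:00.0 trap ttl_value_is_too_small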

> 
> > This control plane policy has been hard-coded in mlxsw for a few years
> > now (based on sane defaults), but it obviously does not fit everyone's
> > needs. Different users have different use cases and different CPUs
> > connected to the ASIC. Some have Celeron / Atom CPUs while others have
> > more high-end Xeon CPUs, which are obviously capable of handling more
> > packets per second. You also have zero visibility into how many packets
> > were dropped by these hardware policers.
> 
> There are embedded Atom systems out there with multi-gig interfaces;
> they obviously can't ingest peak traffic, no matter whether they are
> connected to a switch ASIC or a NIC.

Not the same thing. Every packet received by such systems should reach
the attached host. The control plane and the data plane are the same.
The whole point of this work is to rate limit packets coming from your
offloaded data plane to the control plane.

> 
> > By exposing these traps we allow users to tune these policers and gain
> > visibility into how many packets they dropped. In the future we will
> > also allow changing their traffic class, so that (for example) packets
> > hitting local routes are scheduled towards the CPU before packets
> > dropped due to the ingress VLAN filter.
> > 
> > If you don't have any special needs you are probably OK with the
> > defaults, in which case you don't need to do anything (no special
> > sauce).
> 
> As much as traps which forward traffic to the CPU fit the switch
> programming model, we'd rather see a solution that offloads constructs
> which are also applicable to the software world.

In the software world the data plane and the control plane are the same.
The CPU sees every packet. IGMP packets trigger MDB modifications,
packets that incurred a neighbour miss trigger ARP / ND resolution, etc.
These are all control plane operations.

Once you separate your control plane from the data plane and offload the
latter to capable hardware (e.g., a switch ASIC), you create a need to
limit the packets coming from your data plane to the control plane. This
is a hardware-specific problem.

> 
> Sniffing dropped frames to troubleshoot is one thing, but IMHO traps
> which default to "trap" are a bad smell.

These traps exist today. They are programmed by mlxsw during
initialization. Without them basic stuff like DHCP/ARP/STP would not
work and you would need the "special sauce" you previously mentioned.

By exposing them via devlink-trap we allow users to configure the rate
at which packets are delivered from the offloaded data plane to the
control plane running on the attached host. This is the only set
operation you can do. Nothing else.
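
For example (a rough sketch; the policer ID, rate and group name are
illustrative, and the set of trap groups depends on the device):

  # limit packets delivered to the host through policer 1 to 1000 pps
  $ devlink trap policer set pci/0000:01:00.0 policer 1 rate 1000 burst 128

  # bind the trap group carrying, e.g., STP packets to that policer
  $ devlink trap group set pci/0000:01:00.0 group stp policer 1

  # read back how many packets the policer dropped
  $ devlink -s trap policer show pci/0000:01:00.0 policer 1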

Anyway, I don't know how to argue with "bad smell". I held off on
sending the next patch set because this discussion was ongoing, but at
this point I don't think it's possible for me to explain the problem and
the solution more clearly, so I'll go ahead and send the patches.

Thanks
