Message-ID: <20190911225841.GB5710@lunn.ch>
Date: Thu, 12 Sep 2019 00:58:41 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Robert Beckett <bob.beckett@...labora.com>
Cc: Ido Schimmel <idosch@...lanox.com>,
Florian Fainelli <f.fainelli@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Vivien Didelot <vivien.didelot@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Jiri Pirko <jiri@...nulli.us>
Subject: Re: [PATCH 0/7] net: dsa: mv88e6xxx: features to handle network
storms
> We have a setup as follows:
>
> Marvell 88E6240 switch chip, accepting traffic from 4 ports. Port 1
> (P1) is critical priority, no dropped packets allowed, all others can
> be best effort.
>
> CPU port of switch chip is connected via phy to phy of intel i210 (igb
> driver).
>
> i210 is connected via pcie switch to imx6.
>
> When too many small packets were delivered to the CPU port (e.g.
> during a broadcast flood) we saw dropped packets.
>
> The packets were being received by the i210 into its rx descriptor
> ring fine, but the CPU could not keep up with the load. We saw
> rx_fifo_errors increasing rapidly and ksoftirqd at ~100% CPU.
>
>
> With this in mind, I am wondering whether any amount of tc traffic
> shaping would help?
Hi Robert
The model in Linux is that you start with a software TC filter, and
then offload it to the hardware. So the user configures TC just as
normal, and that configuration is used to program the hardware to do
the same thing as would happen in software. This is exactly the same as
do with bridging. You create a software bridge and add interfaces to
the bridge. This then gets offloaded to the hardware and it does the
bridging for you.
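As a sketch, the bridge case looks like this from userspace (the port
names lan0/lan1 are illustrative; with a DSA driver that implements the
switchdev offload hooks, the forwarding then happens in the switch
ASIC rather than in the software bridge):

```shell
# Create a software bridge and enslave two switch ports to it.
ip link add name br0 type bridge
ip link set dev lan0 master br0
ip link set dev lan1 master br0
ip link set dev br0 up
# The kernel notifies the DSA driver via switchdev, which programs
# the hardware FDB/port state so frames are bridged in the ASIC.
```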
So think about how you can model the Marvell switch capabilities
using TC, and implement offload support for it.
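For instance, rate-limiting broadcast storms towards the CPU might be
expressed as an ingress flower filter with a police action; whether
the skip_sw offload request succeeds depends on the driver implementing
the corresponding flow-block callbacks (port name lan0 and the rate
numbers below are illustrative, not from this thread):

```shell
# Attach a clsact qdisc so ingress filters can be installed.
tc qdisc add dev lan0 clsact
# Match broadcast frames and police them; skip_sw asks the kernel to
# install the rule in hardware only, failing if the driver cannot.
tc filter add dev lan0 ingress flower skip_sw \
    dst_mac ff:ff:ff:ff:ff:ff \
    action police rate 10mbit burst 64k conform-exceed drop
```

If the driver supports the offload, excess broadcast traffic is then
dropped in the switch before it ever reaches the CPU port.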
Andrew