Message-ID: <20191113165300.GC27785@lunn.ch>
Date:   Wed, 13 Nov 2019 17:53:00 +0100
From:   Andrew Lunn <andrew@...n.ch>
To:     Vladimir Oltean <olteanv@...il.com>
Cc:     netdev <netdev@...r.kernel.org>
Subject: Re: Offloading DSA taggers to hardware

Hi Vladimir

I've not seen any hardware that can do this. There is an
Atheros/Qualcomm integrated SoC/switch where the 'header' is actually
just a field in the transmit/receive descriptor. There is an
out-of-tree driver for it, and the tag driver is very minimal. But
clearly this only works for integrated systems.
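
For anyone unfamiliar with what the tag drivers look like: an in-band
tagger is really just an xmit/rcv pair registered through a
dsa_device_ops. Here is a rough sketch, trimmed down from
net/dsa/tag_qca.c as it looked around v5.4. The "example" names and the
16-bit header layout are made up for illustration, and the
descriptor-based out-of-tree driver mentioned above is simpler still,
since there is no in-band header to build or parse at all:

/*
 * Rough sketch of an in-band DSA tag driver, trimmed down from
 * net/dsa/tag_qca.c (~v5.4).  The "example" names and the 16-bit header
 * layout are made up for illustration.
 */
#include <linux/etherdevice.h>
#include <net/dsa.h>

#include "dsa_priv.h"

#define EXAMPLE_HDR_LEN         2       /* 16-bit tag after the MAC addresses */
#define EXAMPLE_PORT_MASK       0x7

static struct sk_buff *example_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct dsa_port *dp = dsa_slave_to_port(dev);
        __be16 *phdr;

        if (skb_cow_head(skb, EXAMPLE_HDR_LEN) < 0)
                return NULL;

        /* Insert the tag between the source MAC address and the EtherType. */
        skb_push(skb, EXAMPLE_HDR_LEN);
        memmove(skb->data, skb->data + EXAMPLE_HDR_LEN, 2 * ETH_ALEN);

        phdr = (__be16 *)(skb->data + 2 * ETH_ALEN);
        *phdr = htons(1 << dp->index);  /* hypothetical: destination port bitmap */

        return skb;
}

static struct sk_buff *example_rcv(struct sk_buff *skb, struct net_device *dev,
                                   struct packet_type *pt)
{
        __be16 *phdr;
        int port;

        if (unlikely(!pskb_may_pull(skb, EXAMPLE_HDR_LEN)))
                return NULL;

        /* eth_type_trans() already pulled ETH_HLEN bytes, so the 2-byte tag
         * sits right before skb->data.
         */
        phdr = (__be16 *)(skb->data - EXAMPLE_HDR_LEN);
        port = ntohs(*phdr) & EXAMPLE_PORT_MASK;

        /* Strip the tag and move the MAC addresses up over it. */
        skb_pull_rcsum(skb, EXAMPLE_HDR_LEN);
        memmove(skb->data - ETH_HLEN, skb->data - ETH_HLEN - EXAMPLE_HDR_LEN,
                ETH_HLEN - EXAMPLE_HDR_LEN);

        skb->dev = dsa_master_find_slave(dev, 0, port);
        if (!skb->dev)
                return NULL;

        return skb;
}

static const struct dsa_device_ops example_netdev_ops = {
        .name     = "example",
        .proto    = DSA_TAG_PROTO_QCA,  /* a real driver defines its own value */
        .xmit     = example_xmit,
        .rcv      = example_rcv,
        .overhead = EXAMPLE_HDR_LEN,
};

MODULE_LICENSE("GPL");
MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_QCA);

module_dsa_tag_driver(example_netdev_ops);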

The other 'smart' feature I've seen in NICs with respect to DSA is
the ability to do hardware checksums. The Freescale FEC, for example,
cannot figure out where the IP header is because of the DSA header,
and so cannot calculate IP/TCP/UDP checksums. Marvell, and I expect
some other vendors that make both MAC and switch devices, know about
these headers and can do checksumming.
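
Nothing in mainline handles this automatically as far as I know, but in
principle a tagger (or the DSA core) could fall back to software
checksumming before the header is inserted, using the same
skb_checksum_help() fallback the core uses elsewhere. A sketch, wrapping
the hypothetical example_xmit() from the previous sketch:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical: resolve a pending offloaded checksum in software before
 * the switch tag goes in, since the MAC below will not be able to find
 * the IP header once the tag is there.  Not what mainline does today.
 */
static struct sk_buff *example_xmit_csum(struct sk_buff *skb,
                                         struct net_device *dev)
{
        if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
                return NULL;

        return example_xmit(skb, dev);
}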

I'm not even sure there are any NICs which can do GSO or LRO when
there is a DSA header involved.

In the CPU-to-switch direction, I think many of the QoS issues are
higher up the stack. By the time the tagger is involved, all the queue
discipline stuff has been done, and it really is time to send the
frame. In the 'post-bufferbloat world', the NIC's hardware queue
should be small, so QoS is not so relevant once you reach the TX
queue. The real QoS issue, I guess, is that the slave interfaces have
no idea they are sharing resources at the lowest level. So high-priority
frames from slave 1 are not differentiated from best-effort frames
from slave 2. If we were serious about improving QoS, we would need a
meta scheduler across the slaves, feeding the master interface in a
QoS-aware way.
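
To make that concrete, here is roughly what the transmit path looked
like around this time, paraphrased from net/dsa/slave.c with the stats
and PTP timestamping bits left out; the exact code differs, but the
shape is the point:

static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct dsa_port *dp = dsa_slave_to_port(dev);
        struct sk_buff *nskb;

        /* The slave's qdisc has already run by the time we are called, so
         * the only thing left to do here is insert the switch tag.
         */
        nskb = dp->cpu_dp->tag_ops->xmit(skb, dev);
        if (!nskb)
                return NETDEV_TX_OK;

        /* Re-enter the stack on the shared master interface.  Every slave
         * funnels into this one device, so a high-priority frame from one
         * slave and best-effort traffic from another are indistinguishable
         * here unless the master's own qdisc looks at skb->priority.
         */
        nskb->dev = dp->cpu_dp->master;
        dev_queue_xmit(nskb);

        return NETDEV_TX_OK;
}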

In the other direction, how much is the NIC really looking at QoS
information on the receive path? Are you thinking RPS? I'm not sure
any of the NICs commonly used today with DSA are actually multi-queue
and do RPS.

Another aspect here might be that what Linux is doing with DSA is
probably well past the silicon vendors' expected use cases. None of
the 'vendor crap' drivers I've seen for these SOHO-class switches have
the level of integration we have in Linux. We are pushing the limits
of the host/switch interface much more than vendors do, so perhaps
silicon vendors are simply not aware of the limits in these areas. But
DSA is being successful, vendors are taking more notice of it, and
maybe with time the host/switch interface will improve. NICs might
start supporting GSO/LRO when there is a DSA header involved.
Multi-queue NICs might become more popular in this class of hardware,
and RPS might learn how to handle DSA headers. But my guess would be
that it will be a Marvell NIC paired with a Marvell switch, a Broadcom
NIC paired with a Broadcom switch, etc. I doubt there will be
cross-vendor support.

	Andrew
