Date:   Thu, 25 Mar 2021 03:18:15 +0200
From:   Vladimir Oltean <olteanv@...il.com>
To:     Florian Fainelli <f.fainelli@...il.com>
Cc:     Martin Blumenstingl <martin.blumenstingl@...glemail.com>,
        netdev@...r.kernel.org, Hauke Mehrtens <hauke@...ke-m.de>,
        andrew@...n.ch, vivien.didelot@...il.com, davem@...emloft.net,
        kuba@...nel.org
Subject: Re: lantiq_xrx200: Ethernet MAC with multiple TX queues

On Wed, Mar 24, 2021 at 04:07:47PM -0700, Florian Fainelli wrote:
> > What are the benefits of mapping packets to TX queues of the DSA master
> > from the DSA layer?
> 
> For systemport and bcm_sf2 this was explained in this commit:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d156576362c07e954dc36e07b0d7b0733a010f7d
> 
> In a nutshell, the switch hardware can return the queue status back to
> the systemport's transmit DMA so that it can automatically pace the TX
> completion interrupts. To do that, we need to establish a mapping between
> the DSA slave and master that comprises the switch port number and the
> TX queue number, and tell the HW to inspect the congestion status of
> that particular port and queue.
> 
> What this is meant to address is a "lossless" (within the SoC at least)
> behavior when you have user ports that are connected at a speed lower
> than that of your internal connection to the switch, typically Gigabit
> or more. If you send 1 Gbit/s worth of traffic down to a port that is
> connected at 100 Mbit/s, there will be roughly 90% packet loss unless
> you have a way to pace the Ethernet controller's transmit DMA, which
> then ultimately limits the TX completion of the socket buffers so things
> work nicely. I believe that per-queue flow control was evaluated before
> and an out-of-band mechanism was preferred, but I do not remember the
> details of the decision to use ACB.
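
(The ~90% figure follows directly from the rates: of 1000 Mbit/s offered
to a 100 Mbit/s egress port, only 100 Mbit/s can leave, so 900/1000 = 90%
of the traffic is dropped once buffers fill, absent backpressure. Below is
a minimal C sketch of the mapping idea described above, assuming a flat
port * queues-per-port + queue encoding; the names master_tx_ring,
map_slave_queue and QUEUES_PER_PORT are hypothetical, not the actual
systemport/bcm_sf2 code -- see the commit linked above for that.)

#include <stdbool.h>
#include <stdio.h>

#define QUEUES_PER_PORT 8 /* assumed TX queue count per user port */

struct master_tx_ring {
	unsigned int switch_port;  /* switch port this ring is bound to */
	unsigned int switch_queue; /* TX queue on that switch port */
	bool inspect;              /* pace TX completions on congestion */
};

/* Flatten a (switch port, TX queue) pair into a master TX ring index. */
static unsigned int map_slave_queue(unsigned int port, unsigned int queue)
{
	return port * QUEUES_PER_PORT + queue;
}

int main(void)
{
	/* Bind a master ring to port 2, queue 0 and mark it for inspection. */
	struct master_tx_ring ring = {
		.switch_port = 2,
		.switch_queue = 0,
		.inspect = true,
	};

	printf("port %u, queue %u -> master TX ring %u\n",
	       ring.switch_port, ring.switch_queue,
	       map_slave_queue(ring.switch_port, ring.switch_queue));
	return 0;
}

(With QUEUES_PER_PORT = 8, port 2 queue 0 maps to master ring 16; the
master's TX DMA can then watch that port/queue's congestion status out of
band and pace completions instead of dropping, as described above.)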

Interesting system design.

Just to clarify, this port-to-queue mapping is completely optional, right?
You can send packets to a certain switch port through any TX queue of
the systemport?
