Message-ID: <Y+uamTHJSaNHvTbU@lunn.ch>
Date:   Tue, 14 Feb 2023 15:28:41 +0100
From:   Andrew Lunn <andrew@...n.ch>
To:     Wei Fang <wei.fang@....com>
Cc:     Alexander Lobakin <alexandr.lobakin@...el.com>,
        Shenwei Wang <shenwei.wang@....com>,
        Clark Wang <xiaoning.wang@....com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "pabeni@...hat.com" <pabeni@...hat.com>,
        "simon.horman@...igine.com" <simon.horman@...igine.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        dl-linux-imx <linux-imx@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V2 net-next] net: fec: add CBS offload support

> Sorry, I'm not very familiar with configuring the pure software implementation
> of CBS. I tried to configure CBS as follows: the bandwidth of queue 1 was set
> to 30Mbps and queue 2 was set to 20Mbps. Then one stream was sent to queue 1
> at a rate of 50Mbps; the link speed was 1Gbps. But the result was that CBS
> did not seem to take effect.

I'm not that familiar with CBS, but that is what I would expect. You
are oversubscribing the queue by 20Mbps, so that 20Mbps gets
relegated to best effort. And since you have a 1G link, you have
plenty of best effort bandwidth.

As with most QoS queuing, it only really makes a difference to packet
loss when you oversubscribe the link as a whole.

So with your 30Mbps + 20Mbps + BE configuration on a 1G link, send
50Mbps + 0Mbps + 1Gbps. 30Mbps of your 50Mbps stream should be
guaranteed to arrive at the destination. The remaining 20Mbps needs to
share the remaining 970Mbps of link capacity with the 1Gbps of BE
traffic. So you would expect to see a few extra kbps of queue #1
traffic arriving and around 969Mbps of best effort traffic.
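
For reference, a software-only setup for that scenario could look
roughly like the following. This is just a sketch, not tested: eth0,
the priority-to-class map, and the credit values are placeholders, so
adjust them to match your real queue layout:

  # Three traffic classes, one TX queue each; prio 3 -> TC0,
  # prio 2 -> TC1, everything else -> TC2 (best effort)
  tc qdisc add dev eth0 root handle 100: mqprio num_tc 3 \
      map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
      queues 1@0 1@1 1@2 hw 0

  # 30Mbps class: idleslope/sendslope are in kbit/s and
  # sendslope = idleslope - port rate, so -970000 on a 1G link.
  # hicredit/locredit (bytes) here are illustrative only.
  tc qdisc replace dev eth0 parent 100:1 cbs \
      idleslope 30000 sendslope -970000 hicredit 30 locredit -970 offload 0

  # 20Mbps class
  tc qdisc replace dev eth0 parent 100:2 cbs \
      idleslope 20000 sendslope -980000 hicredit 20 locredit -980 offload 0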

However, that is not really the case I'm interested in. This
discussion started from the point that autoneg has resulted in a much
smaller link capacity. The link is now oversubscribed by the CBS
configuration. Should the hardware just give up and go back to default
behaviour, or should it continue to do some CBS?

So let's start with a 7Mbps queue 1 and a 5Mbps queue 2, on a link
which autonegotiates to 100Mbps. Generate traffic of 8Mbps, 6Mbps and
100Mbps BE. You would expect ~7Mbps, ~5Mbps and ~88Mbps to arrive at
the link peer. Your two CBS flows get their reserved bandwidth, plus a
little of the BE. BE gets what remains of the link. Test that and make
sure that is what actually happens with software CBS, and with your TC
offload to hardware.
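
One way to generate those three flows is iperf3 plus clsact filters to
set the skb priority, which is what mqprio keys on (the clsact egress
hook runs before TX queue selection). Again only a sketch, with
made-up addresses and ports, and it assumes an iperf3 -s -p <port>
listener per port on the peer:

  # Steer the two CBS flows by UDP destination port
  tc qdisc add dev eth0 clsact
  tc filter add dev eth0 egress protocol ip flower ip_proto udp \
      dst_port 5201 action skbedit priority 3
  tc filter add dev eth0 egress protocol ip flower ip_proto udp \
      dst_port 5202 action skbedit priority 2

  # 8Mbps into the 7Mbps class, 6Mbps into the 5Mbps class, 100Mbps BE
  iperf3 -c 192.168.1.2 -p 5201 -u -b 8M -t 60 &
  iperf3 -c 192.168.1.2 -p 5202 -u -b 6M -t 60 &
  iperf3 -c 192.168.1.2 -p 5203 -u -b 100M -t 60 &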

Now force the link down to 10Mbps. The CBS queues then oversubscribe
the link. Keep the traffic generator producing 8Mbps, 6Mbps and
100Mbps BE. What I guess the software CBS will do is 7Mbps, 3Mbps and
0 BE. You should confirm this with testing.
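
Forcing the speed should just be ethtool, assuming eth0:

  # Disable autoneg and force 10Mbit/s full duplex
  ethtool -s eth0 speed 10 duplex full autoneg off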

What does this mean for TC offload? You should be aiming for the same
behaviour. So even when the link is oversubscribed, you should still
be programming the hardware.
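
In other words, for the 7Mbps class from the sketch above, I would
expect something like this to keep shaping at 10Mbps, with sendslope
recomputed for the new port rate, rather than the driver rejecting it
or silently falling back. Same placeholder caveats as before:

  # Same 7Mbps reservation, now offloaded;
  # sendslope = 7000 - 10000 for a 10Mbit/s port rate
  tc qdisc replace dev eth0 parent 100:1 cbs \
      idleslope 7000 sendslope -3000 hicredit 30 locredit -970 offload 1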

   Andrew
