Message-ID: <bc70ddd1-0360-5c09-f03d-3560a0948f52@gmail.com>
Date:   Thu, 19 Sep 2019 10:12:02 -0700
From:   Florian Fainelli <f.fainelli@...il.com>
To:     Sascha Hauer <s.hauer@...gutronix.de>
Cc:     Vladimir Oltean <olteanv@...il.com>,
        netdev <netdev@...r.kernel.org>, Andrew Lunn <andrew@...n.ch>,
        Vivien Didelot <vivien.didelot@...oirfairelinux.com>,
        kernel@...gutronix.de
Subject: Re: dsa traffic priorization

On 9/19/19 1:44 AM, Sascha Hauer wrote:
> Hi Florian,
> 
> On Wed, Sep 18, 2019 at 10:41:58AM -0700, Florian Fainelli wrote:
>> On 9/18/19 7:36 AM, Vladimir Oltean wrote:
>>> Hi Sascha,
>>>
>>> On Wed, 18 Sep 2019 at 17:03, Sascha Hauer <s.hauer@...gutronix.de> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> We have a customer using a Marvell 88e6240 switch with EtherCAT on one port and
>>>> regular network traffic on another port. The customer wants to configure two things
>>>> on the switch: First, EtherCAT traffic shall be prioritized over other network traffic
>>>> (effectively prioritizing traffic based on port). Second, the Ethernet controller
>>>> in the CPU is not able to handle full-bandwidth traffic, so the traffic to the CPU
>>>> port shall be rate limited.
>>>>
>>>
>>> You probably already know this, but egress shaping will not drop
>>> frames, just let them accumulate in the egress queue until something
>>> else happens (e.g. queue occupancy threshold triggers pause frames, or
>>> tail dropping is enabled, etc). Is this what you want? It sounds a bit
>>> strange to me to configure egress shaping on the CPU port of a DSA
>>> switch. That literally means you are buffering frames inside the
>>> system. What about ingress policing?
>>
>> Indeed, but I suppose that depending on the switch architecture and/or
>> nomenclature, configuring egress shaping amounts to determining ingress
>> for the ports the frame is going to be forwarded to.
>>
>> For instance, Broadcom switches rarely, if ever, mention ingress, because
>> frames have to originate from somewhere and be forwarded to other
>> port(s); they therefore egress their original port (which, for all
>> practical purposes, is the direct continuation of the ingress stage),
>> and that is where shaping happens. This immediately influences the
>> ingress shaping of the destination port, which will eventually egress
>> the frame, since packets have to be delivered to the final port's
>> egress queue anyway.
>>
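
To make the ingress policing suggestion concrete: in tc terms, a rough
sketch on one of the user-facing ports could look like the following
(purely illustrative; lan0 and the rate are made-up placeholders, and
whether mv88e6xxx can offload such a filter is a separate discussion):

    # attach a clsact qdisc so an ingress filter can be added on lan0
    tc qdisc add dev lan0 clsact
    # police everything ingressing lan0 to ~100 Mbit/s, dropping the excess
    tc filter add dev lan0 ingress matchall \
            action police rate 100mbit burst 64k drop

That bounds what lan0 can push toward the CPU port without buffering
frames inside the switch.
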
>>>
>>>> For reference, the patch below configures the switch to their needs. Now the question
>>>> is how this can be implemented in a way suitable for mainline. It looks like the
>>>> per-port priority mapping for VLAN-tagged packets could be done via ip link add link ...
>>>> ingress-qos-map QOS-MAP. How the default priority would be set is unclear to me.
>>>>
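
For the record, the VLAN ingress-qos-map syntax referred to above looks
roughly like this (lan0 and the PCP-to-skb-priority values are only
placeholders; whether and how DSA would push such a mapping down into
the switch is exactly the open question):

    # map VLAN PCP 5..7 received on lan0.100 to matching skb priorities
    ip link add link lan0 name lan0.100 type vlan id 100 \
            ingress-qos-map 5:5 6:6 7:7
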
>>>
>>> Technically, configuring a match-all rxnfc rule with ethtool would
>>> count as 'default priority' - I have proposed that before. Now I'm not
>>> entirely sure how intuitive it is, but I'm also interested in being
>>> able to configure this.
>>
>> That does not sound too crazy from my perspective.
>>
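
If it helps to visualize it, the UI being hinted at here would
presumably be something along these lines (illustrative only; I am not
claiming any driver accepts exactly this today, and lan0 and queue 7
are placeholders):

    # hypothetical match-all rxnfc rule: assign every frame ingressing
    # lan0 to priority/queue 7
    ethtool -N lan0 flow-type ether action 7
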
>>>
>>>> The other part of the problem seems to be that the CPU port has no network device
>>>> representation in Linux, so there's no interface to configure the egress limits via tc.
>>>> This has been discussed before, but it seems there hasn't been any consensus regarding
>>>> how we want to proceed?
>>
>> You have the DSA master network device which is on the other side of the
>> switch,
> 
> You mean the (in my case) i.MX FEC? Everything I do on this device ends
> up in the FEC itself and not on the switch, right?

Yes, we have a way to overload specific netdevice_ops and ethtool_ops
operations in order to use the i.MX FEC network device (say eth0) as a
configuration entry point while having it operate on the switch: when
the DSA switch was attached to the DSA master, we replaced some of that
network device's operations with ones that piggyback into the switch.
See net/dsa/master.c for details.
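
A quick way to see that redirection in action, assuming the master is
eth0: the switch CPU port's counters show up alongside the FEC's own
statistics, because get_ethtool_stats is among the overloaded
operations:

    # FEC statistics, with the switch CPU port's counters appended
    ethtool -S eth0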
-- 
Florian
