Message-ID: <20190515132701.GD23276@lunn.ch>
Date:   Wed, 15 May 2019 15:27:01 +0200
From:   Andrew Lunn <andrew@...n.ch>
To:     Maxime Chevallier <maxime.chevallier@...tlin.com>
Cc:     Florian Fainelli <f.fainelli@...il.com>,
        Vivien Didelot <vivien.didelot@...il.com>,
        Russell King <linux@...linux.org.uk>, netdev@...r.kernel.org,
        "thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        Heiner Kallweit <hkallweit1@...il.com>
Subject: Re: dsa: using multi-gbps speeds on CPU port

On Wed, May 15, 2019 at 02:39:36PM +0200, Maxime Chevallier wrote:
> Hello everyone,
> 
> I'm working on a setup where I have a 88e6390X DSA switch connected to
> a CPU (an armada 8040) with 2500BaseX and RXAUI interfaces (we only use
> one at a time).

Hi Maxime

RXAUI should just work. By default, the CPU port is configured to its
maximum speed, so it should be doing 10Gbps. Slowing it down to
2500BaseX is, however, an issue.

> I'm facing a limitation with the current way to represent that link,
> where we use a fixed-link description in the CPU port, like this:
> 
> ...
> switch0: switch0@1 {
> 	...
> 	port@0 {
> 		reg = <0>;
> 		label = "cpu";
> 		ethernet = <&eth0>;
> 		phy-mode = "2500base-x";
> 		fixed-link {
> 			speed = <2500>;
> 			full-duplex;
> 		};
> 	};
> };
> ...
> 
> In this scenario, the dsa core will try to create a PHY emulating the
> fixed-link on the DSA port side. This can't work with link speeds above
> 1Gbps, since we don't have any emulation for such PHYs, which would
> need to use C45 MMDs.
> 
> We could add support to emulate these modes, but I think there were some
> discussions about using phylink to support these higher speed fixed-link
> modes, instead of using PHY emulation.
> 
> However, using phylink in master DSA ports seems to be a bit tricky,
> since master ports don't have a dedicated net_device, and instead
> reference the CPU-side netdevice (if I understood correctly).

I think you are getting your terminology wrong. 'master' is eth0 in
the example you gave above. CPU and DSA ports don't have netdev
structures, and so any PHY used with them is not connected to a
netdev.
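
To make the naming concrete, here is a tiny sketch against the current
DSA internals; the helper name is made up:

#include <net/dsa.h>

/* Illustration only: for a CPU port, the netdev reachable from its
 * dsa_port is the *master* (eth0 above), not a netdev belonging to
 * the switch port itself. Only user ports have a slave netdev.
 */
static struct net_device *cpu_port_master(struct dsa_switch *ds, int port)
{
	struct dsa_port *dp = dsa_to_port(ds, port);

	return dsa_is_cpu_port(ds, port) ? dp->master : NULL;
}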

> I'll be happy to help on that, but before prototyping anything, I wanted
> to have your thougts on this, and see if you had any plans.

There are two different issues here.

1) Is using a fixed-link on a CPU or DSA port the right way to do this?
2) Making fixed-link support > 1G.

The reason I decided to use fixed-link on CPU and DSA ports is that we
already had all the code needed to configure a port, and an API to do
it: the adjust_link() callback. Things have moved on since then, and
we now have an additional API, .phylink_mac_config(). It might be
better to use that directly. If there is a max-speed property, create
a phylink_link_state structure, which has no reference to a netdev,
and pass it to .phylink_mac_config().
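
Something along these lines, as a rough sketch; dsa_port_set_max_speed()
is a made-up name, the OF handling is simplified, and I have not
checked that every driver copes with being called this way:

#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/phylink.h>
#include <net/dsa.h>

/* Sketch: build a phylink_link_state from an OF "max-speed" property
 * and hand it straight to the driver's .phylink_mac_config(), with no
 * netdev or fixed PHY involved.
 */
static int dsa_port_set_max_speed(struct dsa_switch *ds, int port,
				  struct device_node *dn)
{
	struct phylink_link_state state = {
		.interface = PHY_INTERFACE_MODE_NA,
		.duplex = DUPLEX_FULL,
		.link = 1,
	};
	u32 max_speed;
	int mode;

	if (of_property_read_u32(dn, "max-speed", &max_speed))
		return -EINVAL;
	state.speed = max_speed;

	mode = of_get_phy_mode(dn);
	if (mode >= 0)
		state.interface = mode;

	if (!ds->ops->phylink_mac_config)
		return -EOPNOTSUPP;

	ds->ops->phylink_mac_config(ds, port, MLO_AN_FIXED, &state);
	if (ds->ops->phylink_mac_link_up)
		ds->ops->phylink_mac_link_up(ds, port, MLO_AN_FIXED,
					     state.interface, NULL);
	return 0;
}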

It is just an idea, but maybe you could investigate whether it would
work.

On the master interface (eth0 on the Armada 8040) you still need
something. However, if you look at phylink_parse_fixedlink(), it puts
the speed etc. into a phylink_link_state. It never instantiates a
fixed-phy. So I think it could be expanded to support higher speeds
without too much trouble. The interesting part is the ioctl handler.

	Andrew
