Date:	Sat, 26 Dec 2015 00:41:26 +0100
From:	Andrew Lunn <andrew@...n.ch>
To:	Florian Fainelli <f.fainelli@...il.com>
Cc:	narmstrong@...libre.com, vivien.didelot@...oirfairelinux.com,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH RFC 00/28] DSA: Restructure probing

> I am not really questioning whether the abstraction works, it clearly
> does. What I am wondering is whether the use of the component framework
> requires the master and slaves to be implemented strictly as platform
> devices, or whether we have a choice in how we mix things together,
> e.g. the master is a platform device, but the slaves could be I2C or
> SPI clients, or even platform devices themselves? That was not quite
> obvious to me from looking at the patches, sorry.

Slaves can be any sort of device. From what I've seen with GPU usage,
they are sometimes i2c devices, e.g. HDMI and audio framers. The way I
use slaves in the patches, all they need is a struct device and an OF
node for matching to the master. The master also needs a struct device.
It makes no difference whether these struct devices are embedded in a
platform_device, a struct i2c_client, a struct phy_device, etc.
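
For reference, the component framework only ever deals with the struct
device, so a switch driver on any bus can join the cluster with
something along these lines. This is only a sketch; the function and
ops names are made up and how the master collects its slaves is just
one possibility, not the code from the patches:

#include <linux/component.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Slave side: any driver that owns a struct device can do this. */

static int example_switch_bind(struct device *dev, struct device *master,
			       void *master_data)
{
	/* Hook this switch into the cluster owned by the master. */
	return 0;
}

static void example_switch_unbind(struct device *dev, struct device *master,
				  void *master_data)
{
}

static const struct component_ops example_switch_component_ops = {
	.bind	= example_switch_bind,
	.unbind	= example_switch_unbind,
};

static int example_switch_probe(struct platform_device *pdev)
{
	/* An i2c_client or phy_device based driver passes its own dev. */
	return component_add(&pdev->dev, &example_switch_component_ops);
}

/* Master side: collect the slaves purely by OF node. */

static int example_compare_of(struct device *dev, void *data)
{
	return dev->of_node == data;
}

static int example_dsa_bind(struct device *dev)
{
	/* All components are present; the switch tree can be built here. */
	return 0;
}

static void example_dsa_unbind(struct device *dev)
{
}

static const struct component_master_ops example_dsa_master_ops = {
	.bind	= example_dsa_bind,
	.unbind	= example_dsa_unbind,
};

static int example_dsa_probe(struct platform_device *pdev)
{
	struct component_match *match = NULL;
	struct device_node *child;

	/*
	 * How the list of slaves is discovered is the open question
	 * below; iterating child nodes is only one way to do it.
	 */
	for_each_available_child_of_node(pdev->dev.of_node, child)
		component_match_add(&pdev->dev, &match,
				    example_compare_of, child);

	return component_master_add_with_match(&pdev->dev,
					       &example_dsa_master_ops,
					       match);
}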

> >> mdio_bus@...dbeef {
> >> 	compatible = "acme,mdiobus";
> >> 	..
> >> 	switch0@0 {
> >> 		compatible = "marvell,mv88e6131";
> >> 		reg = <0>;
> >> 		dsa,addr = <1>;
> > 
> > This is not sufficient. It does not tell you which DSA cluster this
> > switch belongs to.
> 
> Would it work if we added an additional number which is the cluster id,
> or do you need a node grouping the switches in a cluster with each
> other, like you proposed in patch 20?

There is one cluster property at the moment, dsa,ethernet. So we could
actually use that as the cluster identifier, if we assume each cluster
has its own host ethernet.
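
To make that concrete, the master could use its dsa,ethernet phandle as
the cluster key when deciding which switch nodes belong to it. A rough
sketch, assuming the distributed binding keeps a dsa,ethernet property
on every switch node (which is not settled):

#include <linux/of.h>

/*
 * Does this switch node belong to the cluster whose host ethernet is
 * host_eth_np?  The switch's dsa,ethernet phandle acts as the cluster
 * identifier.
 */
static bool example_switch_in_cluster(struct device_node *switch_np,
				      struct device_node *host_eth_np)
{
	struct device_node *eth;
	bool match;

	eth = of_parse_phandle(switch_np, "dsa,ethernet", 0);
	if (!eth)
		return false;

	match = (eth == host_eth_np);
	of_node_put(eth);
	return match;
}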

> 
> The question being mostly: if we have the cluster id, the address/index
> in the switch tree, and a doubly linked list using phandles, is that
> good enough to figure out a topology, or shall we really have nodes and
> sub-nodes for that (we would still need at least one linked list of
> phandles to figure out which ports are "dsa")?

The dsa ports tell us how switches are interconnected. So once we have
all the switches, we can determine the topology. What is not so clear
yet is how you know you have all the switches. The binding I proposed
meant the DSA platform device had a complete list of the switch devices
it needed to form the cluster. With the distributed binding you are
suggesting, I don't see an easy way to build the list of slave devices.
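
To illustrate the first point: once every switch node is known, the
interconnect falls out of a simple walk. A sketch, assuming the port
nodes carry "link" phandles to their peer switch ports as in the
existing dsa.txt binding:

#include <linux/of.h>
#include <linux/printk.h>

/* Follow each port's "link" phandles to the peer switch's port nodes. */
static void example_walk_links(struct device_node *switch_np)
{
	struct device_node *port, *link;
	int i;

	for_each_available_child_of_node(switch_np, port) {
		for (i = 0;
		     (link = of_parse_phandle(port, "link", i)) != NULL;
		     i++) {
			pr_info("%s linked to %s\n",
				port->full_name, link->full_name);
			of_node_put(link);
		}
	}
}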

> > Making MDIO-controlled switches hang off the MDIO bus can be done, but
> > it does require some bigger changes to the mdio code. I will need to
> > look at this again, but I think it starts in of_mdiobus_register_phy(),
> > which needs to look at the compatible property and do something other
> > than phy_device_register(). mdiobus_unregister() will also need some
> > work, since it only expects phy devices on the mdio bus.
> 
> Correct, and this is really where I would start before we go on with
> the probing restructuring, because that could impact how the new DSA
> binding will have to be (re)defined. Right now, converting the Marvell
> drivers into individual platform devices is kind of a temporary
> solution, because we do not have a proper MDIO device model for
> devices which are not PHYs.

Agreed. So I guess this is the next thing to work on.
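
To sketch the shape of that change (purely illustrative; the non-PHY
registration helper below is made up, since no such API exists yet):

#include <linux/of.h>
#include <linux/phy.h>

/* Hypothetical: register a non-PHY device sitting on an MDIO bus. */
int example_mdio_device_register(struct mii_bus *bus,
				 struct device_node *np, u32 addr);

static bool example_node_is_mdio_switch(struct device_node *np)
{
	/* The compatibles checked here are just examples. */
	return of_device_is_compatible(np, "marvell,mv88e6131");
}

/*
 * The kind of branch the OF MDIO scan would grow: switch nodes take the
 * new path, everything else stays on the existing
 * get_phy_device()/phy_device_register() path.
 */
static int example_register_mdio_child(struct mii_bus *bus,
				       struct device_node *child, u32 addr)
{
	if (example_node_is_mdio_switch(child))
		return example_mdio_device_register(bus, child, addr);

	/* Fall back to today's PHY registration path. */
	return 0;
}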

> There are lots of good patches in this series that should probably be
> merged right away since they are all improvements.

Care to make a list of which you think should be submitted now?

Thanks
	Andrew
