Date:	Sat, 8 Mar 2014 11:54:32 +0000
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Grant Likely <grant.likely@...aro.org>
Cc:	Philipp Zabel <p.zabel@...gutronix.de>,
	Sascha Hauer <s.hauer@...gutronix.de>,
	Rob Herring <robherring2@...il.com>,
	Mauro Carvalho Chehab <m.chehab@...sung.com>,
	Rob Herring <robh+dt@...nel.org>,
	Sylwester Nawrocki <s.nawrocki@...sung.com>,
	Laurent Pinchart <laurent.pinchart@...asonboard.com>,
	Tomi Valkeinen <tomi.valkeinen@...com>,
	Kyungmin Park <kyungmin.park@...sung.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
	"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
	Philipp Zabel <philipp.zabel@...il.com>
Subject: Re: [RFC PATCH] [media]: of: move graph helpers from
	drivers/media/v4l2-core to drivers/of

On Sat, Mar 08, 2014 at 01:05:50AM +0800, Grant Likely wrote:
> On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel <p.zabel@...gutronix.de> wrote:
> > Or two separate display controllers with two parallel outputs each, a
> > 4-port multiplexer, and an encoder (e.g. lvds):
> > 
> > 	,------------------.
> > 	| display    [port@0]--.   ,-------------.
> > 	| controller [port@1]-. `-[port@0]       |
> > 	`------------------´   `--[port@1]  mux  |
> > 	,------------------.   ,--[port@2]       |    ,-------------.
> > 	| display    [port@0]-´ ,-[port@3] [port@4]--[port] encoder |
> > 	| controller [port@1]--´   `-------------´    `-------------´
> > 	`------------------´
> 
> Similar question here, how is the above complex assembled into a logical
> device? Is there an overarching controller device for this component
> complex?

This is exactly the problem we discussed at Kernel Summit during the DRM
et al. session.  It's the multiple-component problem.  We have this solved
in a subsystem- and firmware-independent manner.

The way this works in imx-drm is that the graph of connections is scanned
to discover which component devices are required, and once all the
components are present, we initialise the logical device.  Beyond that,
the driver structure is an implementation-specific detail.
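
To make that concrete, here's a minimal sketch of the two halves of the
pattern, using the component framework (drivers/base/component.c); the
driver names are hypothetical and the registration plumbing is elided:

#include <linux/component.h>
#include <linux/device.h>

/* Each sub-device driver (display controller, mux, encoder, ...)
 * registers itself as a component from its own probe function: */
static int my_encoder_bind(struct device *dev, struct device *master,
			   void *data)
{
	/* hook this encoder into the logical device being built */
	return 0;
}

static void my_encoder_unbind(struct device *dev, struct device *master,
			      void *data)
{
}

static const struct component_ops my_encoder_ops = {
	.bind	= my_encoder_bind,
	.unbind	= my_encoder_unbind,
};
/* ... component_add(dev, &my_encoder_ops); from the encoder probe ... */

/* The master's bind callback runs only once every component named by
 * the graph scan has registered; the logical device is created here: */
static int my_master_bind(struct device *master)
{
	return component_bind_all(master, NULL);
}

static void my_master_unbind(struct device *master)
{
	component_unbind_all(master, NULL);
}

static const struct component_master_ops my_master_ops = {
	.bind	= my_master_bind,
	.unbind	= my_master_unbind,
};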

The starting point is the CPU-facing part, because that's what provides
the logical interfaces, especially as there may be more than one logical
interface (each of which may belong to an entirely separate kernel
subsystem - display vs capture).
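
Expressed with the of_graph_* helper names this patch proposes, the scan
from the CPU-facing node is just a walk over its endpoints (a sketch;
the recursion and bookkeeping are elided):

#include <linux/of.h>
#include <linux/of_graph.h>

/* Walk all endpoints of the CPU-facing node and visit each remote
 * device along the data path: */
static void scan_downstream(struct device_node *local)
{
	struct device_node *ep = NULL, *remote;

	while ((ep = of_graph_get_next_endpoint(local, ep))) {
		remote = of_graph_get_remote_port_parent(ep);
		if (!remote)
			continue;
		/* 'remote' is a required component: record it (e.g. via
		 * component_match_add()) and, if it has downstream ports
		 * of its own, continue the walk from there. */
		of_node_put(remote);
	}
}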

> >  ,--------.    ,---------.
> >  | dc [port]--[port@0]   |
> >  `--------´    | encoder |    ,-----------.
> >                |   [port@1]--[port] panel |
> >                `---------´    `-----------´ 
> 
> In all of the above examples I see a unidirectional data flow. There are
> producer ports and consumer ports. Is there any (reasonable) possibility
> of peer ports that are bidirectional?

Generally, they will be unidirectional, but when you realise that you want
to be able to walk the graph from the CPU point of view, the data flow
direction is meaningless.  What you need to know is how the components
are connected together.

If you model the links as per the data flow direction, then while you may
be able to walk the display side - e.g., dc -> encoder -> panel - due to
your DT links, you wouldn't be able to walk from the CPU interface down to
your camera sensor: you would need to do a full scan of the DT on every
iteration to reverse the direction of the linkages.
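
For illustration, reversing a single link when only forward phandles
exist comes down to a brute-force scan over every node carrying a
remote-endpoint property (a sketch; find_upstream_endpoint is a
hypothetical helper):

#include <linux/of.h>

/* Find the endpoint whose remote-endpoint phandle points at 'target'.
 * With forward links only, the whole tree has to be scanned: */
static struct device_node *find_upstream_endpoint(struct device_node *target)
{
	struct device_node *ep = NULL, *remote;

	while ((ep = of_find_node_with_property(ep, "remote-endpoint"))) {
		remote = of_parse_phandle(ep, "remote-endpoint", 0);
		of_node_put(remote);
		if (remote == target)
			return ep;	/* caller gets the reference */
	}
	return NULL;
}

Do that once per link and the cost multiplies accordingly, which is the
point about repeated parses below.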

> > According to video-interfaces.txt, it is expected that endpoints contain
> > phandles pointing to the remote endpoint on both sides. I'd like to
> > leave this up to the more specialized bindings, but I can see that this
> > makes enumerating the connections starting from each device tree node
> > easier, for example at probe time.
> 
> This has come up in the past. That approach comes down to premature
> optimization at the expense of making the data structure more prone to
> inconsistency. I consider it to be a bad pattern.
> 
> Backlinks are good for things like dynamically linked lists that need to
> be fast and the software fully manages the links. For a data structure like
> the FDT it is better to have the data in one place, and one place only.
> Besides, computers are *good* at parsing data structures. :-)
> 
> I appreciate that existing drivers may be using the backlinks, but I
> strongly recommend not doing it for new users.

"Backlinks" doesn't convey to me exactly what you mean here.  Are you
talking about the ones in the opposite direction to data flow, or the
ones pointing upstream towards the CPU?  (I'm going to use upstream to
refer to the ones pointing towards the CPU.)

That matters: as I've said above, when creating logical devices, you
tend to start at the CPU interfaces and work your way down the graph.
Working the opposite way is much harder, especially if you hit some
kind of muxing arrangement which results in two sensors combining
into one logical grabbing device.

> > If the back links are not provided in the device tree, a device at the
> > receiving end of a remote-endpoint phandle can only know about connected
> > remote ports after the kernel parsed the whole graph and created the
> > back links itself.
> 
> A whole tree parse isn't expensive. We do it all the time.

A whole tree parse may not be expensive, but doing it multiple times
increases the cost manyfold.  Taking the first example above, it may
be necessary to do that five times, once per link, and that may be
just one subsystem.  It could very easily become 10 times, at which
point the expense is an order of magnitude more than it was.

We do have real hardware which is arranged as per that first diagram.
It's the i.MX6Q - two IPUs, each with two display controllers, giving a
total of four output video streams.  Those are then fed into a set of
muxes which then feed into a set of encoders.  Effectively, all those
muxes make a switching matrix, allowing you to connect any encoder to
any of the four output video streams - even connecting two or more
encoders to the same video stream.

There are HDMI, TV, LDB and parallel encoders.  So that's 20 links in
total (in reality, it's 16, because we don't represent the mux-to-encoder
links - they're not relevant).  So, is 16x "a whole tree parse which
isn't expensive" still not expensive?

I'm not saying that we /need/ this; I'm merely pointing out that your
comment may be correct for one-offs, but that may not be what happens
in reality.

I think for the case I mention above, it's entirely possible to do
without the upstream links - whether that suits everyone, I'm not sure.

> > >  Graphs should be one direction only.
> > 
> > But this is not what the current binding in video-interfaces.txt
> > describes. I don't think it is a good idea to explicitly forbid
> > backlinks in this binding.
> 
> Nah, I disagree. The binding document /should/ forbid it to make it
> really clear what the common pattern is. The code doesn't have to
> enforce it (you don't want to break existing platforms), but the
> document should be blunt.
> 
> Another thought. In terms of the pattern, I would add a recommendation
> that there should be a way to identify ports of a particular type; i.e.
> if I were using the pattern to implement a patch bay of DSP filters,
> where each filter has inputs and outputs, then each target node should
> have a unique
> identifier property analogous to "interrupt-controller" or
> "gpio-controller". In this fictitious example I would probably choose
> "audiostream-input-port" and "audiostream-output-port" as empty
> properties in the port nodes. (I'm not suggesting a change to the
> existing binding, but making a recommendation to new users).
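
For what it's worth, testing such a marker on the kernel side would be
trivial - a sketch against Grant's hypothetical property names:

#include <linux/of.h>

/* Classify a port node by its (hypothetical) marker property: */
static bool is_audio_input_port(struct device_node *port)
{
	return of_property_read_bool(port, "audiostream-input-port");
}

static bool is_audio_output_port(struct device_node *port)
{
	return of_property_read_bool(port, "audiostream-output-port");
}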

Given the amount of discussion of these bindings which are already in
the kernel, I have to ask a really obvious question: were they reviewed
before they were merged into the kernel?  It sounds like the above
discussion should already have happened...

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.