Date: Fri, 9 Feb 2024 16:15:19 +0100
From: Maxime Ripard <mripard@...nel.org>
To: Sui Jingfeng <sui.jingfeng@...ux.dev>, 
	Lucas Stach <l.stach@...gutronix.de>, Russell King <linux+etnaviv@...linux.org.uk>, 
	Christian Gmeiner <christian.gmeiner@...il.com>, David Airlie <airlied@...il.com>, 
	Thomas Zimmermann <tzimmermann@...e.de>, dri-devel@...ts.freedesktop.org, etnaviv@...ts.freedesktop.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [etnaviv-next v13 7/7] drm/etnaviv: Add support for
 vivante GPU cores attached via PCI(e)

On Fri, Feb 09, 2024 at 12:02:48PM +0100, Daniel Vetter wrote:
> On Thu, Feb 08, 2024 at 04:27:02PM +0100, Maxime Ripard wrote:
> > On Wed, Feb 07, 2024 at 10:35:49AM +0100, Daniel Vetter wrote:
> > > On Wed, Feb 07, 2024 at 01:27:59AM +0800, Sui Jingfeng wrote:
> > > > The component helper functions are the glue used to bind multiple GPU
> > > > cores to a virtual master platform device, which is fine and works well
> > > > for SoCs that contain multiple GPU cores.
> > > > 
> > > > The problem is that userspace programs (such as the X server and Mesa)
> > > > will search for the PCIe device and use it if it exists. In other words,
> > > > userspace programs open the PCIe device with higher priority. Creating a
> > > > virtual master platform device for PCI(e) GPUs is unnecessary, as the
> > > > PCI device has already been created by the time drm/etnaviv is loaded.
> > > > 
> > > > We create virtual platform devices as a representation for the vivante
> > > > GPU IP cores. As all of the subcomponents are attached via the PCIe
> > > > master device, we reflect this hardware layout by binding all of the
> > > > virtual children to the real master.
> > > > 
> > > > Signed-off-by: Sui Jingfeng <sui.jingfeng@...ux.dev>
> > > 
> > > Uh, so my understanding is that drivers really shouldn't create platform
> > > devices of their own. For this case here I think the aux-bus framework is
> > > the right thing to use. An alternative would be some infrastructure where
> > > you feed a DT tree to the driver core or PCI subsystem and it instantiates
> > > it all for you correctly, and especially with hot-unplug all done right,
> > > since this is PCI now, not actually part of an SoC that cannot be
> > > hot-unplugged.
> > 
> > I don't think we need intermediate platform devices at all. We just need
> > to register our GPU against the PCI device and that's it. We don't need
> > a platform device, we don't need the component framework.
> 
> Afaik that's what this series does. The component stuff is for the
> internal structure of the GPU IP, so that the same modular approach that
> works for arm-soc also works for PCI chips.

But there should be a single PCI device, while we have multiple "DT"
devices, right? Or are there several PCI devices on that PCI card too?

Maxime
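
[For context, the aux-bus approach suggested earlier in the thread would
look roughly like the sketch below: the PCI driver's probe registers one
auxiliary device per GPU core found on the card, each parented to the PCI
device, instead of creating a virtual master platform device. This is not
code from the series; it only illustrates the auxiliary bus API from
<linux/auxiliary_bus.h>, and the struct etnaviv_core_adev wrapper and the
names etnaviv_adev_release / etnaviv_create_core_adev / "gpu-core" are
hypothetical. Not standalone-compilable outside a kernel build.]

```c
/* Hypothetical wrapper around an auxiliary device, one per GPU core. */
struct etnaviv_core_adev {
	struct auxiliary_device adev;
	/* per-core data would go here */
};

/* dev.release must be set before auxiliary_device_init(). */
static void etnaviv_adev_release(struct device *dev)
{
	struct auxiliary_device *adev = to_auxiliary_dev(dev);

	kfree(container_of(adev, struct etnaviv_core_adev, adev));
}

static int etnaviv_create_core_adev(struct pci_dev *pdev, int idx)
{
	struct etnaviv_core_adev *core;
	int ret;

	core = kzalloc(sizeof(*core), GFP_KERNEL);
	if (!core)
		return -ENOMEM;

	core->adev.name = "gpu-core";		/* matched as "<modname>.gpu-core" */
	core->adev.id = idx;			/* makes each core's name unique */
	core->adev.dev.parent = &pdev->dev;	/* real PCI device is the parent */
	core->adev.dev.release = etnaviv_adev_release;

	ret = auxiliary_device_init(&core->adev);
	if (ret) {
		kfree(core);
		return ret;
	}

	ret = auxiliary_device_add(&core->adev);
	if (ret)
		auxiliary_device_uninit(&core->adev);
	return ret;
}
```

A matching auxiliary_driver with an id_table entry for "<modname>.gpu-core"
would then probe against each core, with device-core-managed unbind on
hot-unplug of the PCI parent.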
