Message-ID: <ZcYGWEG8eqAiqqai@phenom.ffwll.local>
Date: Fri, 9 Feb 2024 12:02:48 +0100
From: Daniel Vetter <daniel@...ll.ch>
To: Maxime Ripard <mripard@...nel.org>
Cc: Sui Jingfeng <sui.jingfeng@...ux.dev>,
Lucas Stach <l.stach@...gutronix.de>,
Russell King <linux+etnaviv@...linux.org.uk>,
Christian Gmeiner <christian.gmeiner@...il.com>,
David Airlie <airlied@...il.com>,
Thomas Zimmermann <tzimmermann@...e.de>,
dri-devel@...ts.freedesktop.org, etnaviv@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Subject: Re: Re: [etnaviv-next v13 7/7] drm/etnaviv: Add support for vivante
GPU cores attached via PCI(e)
On Thu, Feb 08, 2024 at 04:27:02PM +0100, Maxime Ripard wrote:
> On Wed, Feb 07, 2024 at 10:35:49AM +0100, Daniel Vetter wrote:
> > On Wed, Feb 07, 2024 at 01:27:59AM +0800, Sui Jingfeng wrote:
> > > The component helper functions are the glue used to bind multiple GPU
> > > cores to a virtual master platform device, which is fine and works well
> > > for SoCs that contain multiple GPU cores.
> > >
> > > The problem is that userspace programs (such as the X server and Mesa)
> > > will search for the PCIe device and use it if it exists. In other words,
> > > userspace programs open the PCIe device with higher priority. Creating a
> > > virtual master platform device for PCI(e) GPUs is unnecessary, as the PCI
> > > device has already been created by the time drm/etnaviv is loaded.
> > >
> > > We create virtual platform devices as a representation of the vivante GPU
> > > IP cores. As all of the subcomponents are attached via the PCIe master
> > > device, we reflect this hardware layout by binding all of the virtual
> > > children to the real master.
> > >
> > > Signed-off-by: Sui Jingfeng <sui.jingfeng@...ux.dev>
> >
> > Uh so my understanding is that drivers really shouldn't create platform
> > devices of their own. For this case here I think the aux-bus framework is
> > the right thing to use. An alternative would be some infrastructure where
> > you feed a DT tree to the driver core or pci subsystem and it instantiates
> > it all for you correctly, and especially with hotunplug all done right,
> > since this is pci now, not actually part of an soc that cannot be
> > hotunplugged.
>
> I don't think we need intermediate platform devices at all. We just need
> to register our GPU against the PCI device and that's it. We don't need
> a platform device, we don't need the component framework.
Afaik that's what this series does. The component stuff is for the
internal structure of the gpu ip, so that the same modular approach that
works for arm-soc also works for pci chips.
Otherwise we end up with each driver hand-rolling that stuff, which is
de facto what both nouveau and amdgpu do (intel hw is too much a mess for
that component-driver based approach to actually work reasonably well).
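
To make the shape concrete, here is a rough sketch of the component-based
binding under a PCI device. All names below (etnaviv_pci_probe,
etnaviv_master_ops, compare_gpu_core, gpu_core_ops) are illustrative only
and not taken from the actual series:

#include <linux/component.h>
#include <linux/pci.h>

/* Sketch: match any device that hangs off this PCI device. */
static int compare_gpu_core(struct device *dev, void *data)
{
	struct pci_dev *pdev = data;

	return dev->parent == &pdev->dev;
}

static int etnaviv_master_bind(struct device *dev)
{
	/* Bind all matched GPU core components, then set up the DRM device. */
	return component_bind_all(dev, NULL);
}

static void etnaviv_master_unbind(struct device *dev)
{
	component_unbind_all(dev, NULL);
}

static const struct component_master_ops etnaviv_master_ops = {
	.bind = etnaviv_master_bind,
	.unbind = etnaviv_master_unbind,
};

static int etnaviv_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *ent)
{
	struct component_match *match = NULL;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* One match entry per GPU IP core sitting behind this PCI device. */
	component_match_add(&pdev->dev, &match, compare_gpu_core, pdev);

	/* The real PCI device acts as the aggregate/master device. */
	return component_master_add_with_match(&pdev->dev,
					       &etnaviv_master_ops, match);
}

/*
 * Each GPU core device then calls component_add(dev, &gpu_core_ops) from
 * its own probe, the same way the existing arm-soc/platform path does.
 */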
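
For comparison, the aux-bus route mentioned above would look roughly like
this; again only a sketch with made-up names (etnaviv_pci_add_core,
gpu_core_adev_release), not a proposal of actual code:

#include <linux/auxiliary_bus.h>
#include <linux/pci.h>
#include <linux/slab.h>

static void gpu_core_adev_release(struct device *dev)
{
	kfree(container_of(dev, struct auxiliary_device, dev));
}

/* Sketch: expose one GPU IP core as an auxiliary device of the PCI device. */
static int etnaviv_pci_add_core(struct pci_dev *pdev, u32 id)
{
	struct auxiliary_device *adev;
	int ret;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "gpu-core";
	adev->id = id;
	adev->dev.parent = &pdev->dev;
	adev->dev.release = gpu_core_adev_release;

	ret = auxiliary_device_init(adev);
	if (ret) {
		kfree(adev);
		return ret;
	}

	ret = auxiliary_device_add(adev);
	if (ret) {
		auxiliary_device_uninit(adev);
		return ret;
	}

	/* A matching auxiliary_driver then binds against "<module>.gpu-core". */
	return 0;
}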
Cheers, Sima
> > I think I've seen some other pci devices from arm soc designs that would
> > benefit from this too, so lifting this logic into a pci function would
> > make sense imo.
>
> Nouveau supports both iirc.
>
> Maxime
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch