Message-ID: <67936300-7bfb-4f5e-9b80-ee339313fd61@linux.dev>
Date: Sat, 10 Feb 2024 00:25:33 +0800
From: Sui Jingfeng <sui.jingfeng@...ux.dev>
To: Maxime Ripard <mripard@...nel.org>, Lucas Stach <l.stach@...gutronix.de>,
 Russell King <linux+etnaviv@...linux.org.uk>,
 Christian Gmeiner <christian.gmeiner@...il.com>,
 David Airlie <airlied@...il.com>, Thomas Zimmermann <tzimmermann@...e.de>,
 dri-devel@...ts.freedesktop.org, etnaviv@...ts.freedesktop.org,
 linux-kernel@...r.kernel.org
Subject: Re: [etnaviv-next v13 7/7] drm/etnaviv: Add support for vivante GPU
 cores attached via PCI(e)

Hi,

On 2024/2/9 23:15, Maxime Ripard wrote:
> On Fri, Feb 09, 2024 at 12:02:48PM +0100, Daniel Vetter wrote:
>> On Thu, Feb 08, 2024 at 04:27:02PM +0100, Maxime Ripard wrote:
>>> On Wed, Feb 07, 2024 at 10:35:49AM +0100, Daniel Vetter wrote:
>>>> On Wed, Feb 07, 2024 at 01:27:59AM +0800, Sui Jingfeng wrote:
>>>>> The component helper functions are the glue used to bind multiple GPU
>>>>> cores to a virtual master platform device, which is fine and works well
>>>>> for SoCs that contain multiple GPU cores.
>>>>>
>>>>> The problem is that userspace programs (such as the X server and Mesa)
>>>>> will search for a PCIe device to use if one exists. In other words,
>>>>> userspace programs open the PCIe device with higher priority. Creating a
>>>>> virtual master platform device for PCI(e) GPUs is unnecessary, as the
>>>>> PCI device has already been created by the time drm/etnaviv is loaded.
>>>>>
>>>>> We create virtual platform devices as a representation of the Vivante
>>>>> GPU IP cores. As all of the subcomponents are attached via the PCIe
>>>>> master device, we reflect this hardware layout by binding all of the
>>>>> virtual children to the real master.
>>>>>
>>>>> Signed-off-by: Sui Jingfeng <sui.jingfeng@...ux.dev>
>>>> Uh so my understanding is that drivers really shouldn't create platform
>>>> devices of their own. For this case here I think the aux-bus framework is
>>>> the right thing to use. An alternative would be some infrastructure where
>>>> you feed a DT tree to driver core or pci subsystem and it instantiates it
>>>> all for you correctly, and especially with hotunplug all done right since
>>>> this is pci now, not actually part of the soc that cannot be hotunplugged.
>>> I don't think we need intermediate platform devices at all. We just need
>>> to register our GPU against the PCI device and that's it. We don't need
>>> a platform device, we don't need the component framework.
>> Afaik that's what this series does. The component stuff is for the
>> internal structure of the gpu ip, so that the same modular approach that
>> works for arm-soc also works for pci chips.
> But there should be a single PCI device, while we have multiple "DT"
> devices, right? Or are there several PCI devices too on that PCI card?


There is only a single PCI(e) device on that PCI(e) card, and this single
PCI(e) device is selected as the component master. All other hardware IP
blocks are shipped behind this single PCI(e) master. They may include
display controllers, GPUs, video decoders, HDMI display bridge units, etc.

But all of those hardware IP blocks share the same MMIO register PCI BAR;
this PCI BAR is a PCI(e) MEM resource. It is a relatively *big* chunk, as
large as 32MB of address range for the JingJia Macro dGPU. Therefore, I
break the whole register memory (MMIO) resource into smaller pieces by
creating platform devices manually; such a manually created platform
device is called a virtual child in this series.
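
To make this concrete, below is a minimal sketch (not code from this series)
of how one large register BAR could be carved into per-IP MMIO windows that
are registered as platform-device children of the PCI device. The
ip_blocks[] table with its names, offsets and sizes is invented purely for
illustration:

#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

struct ip_block {
        const char *name;
        resource_size_t offset;
        resource_size_t size;
};

/* hypothetical layout inside the single 32MB register BAR */
static const struct ip_block ip_blocks[] = {
        { "etnaviv-gpu", 0x000000, 0x100000 },
        { "etnaviv-gpu", 0x100000, 0x100000 },
};

static int register_virtual_children(struct pci_dev *pdev)
{
        resource_size_t base = pci_resource_start(pdev, 0);
        int i;

        for (i = 0; i < ARRAY_SIZE(ip_blocks); i++) {
                struct resource res = DEFINE_RES_MEM(base + ip_blocks[i].offset,
                                                     ip_blocks[i].size);
                struct platform_device_info info = {
                        /* the PCI device is the real parent of the virtual child */
                        .parent  = &pdev->dev,
                        .name    = ip_blocks[i].name,
                        .id      = i,
                        .res     = &res,
                        .num_res = 1,
                };
                struct platform_device *child;

                child = platform_device_register_full(&info);
                if (IS_ERR(child))
                        return PTR_ERR(child); /* teardown omitted for brevity */
        }

        return 0;
}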

In short, we cut the whole into smaller pieces; each smaller piece is a
single hardware IP block and thus deserves its own device driver. We will
have multiple platform devices if the dGPU contains multiple hardware IP
blocks. On the driver side, we bind all of the scattered driver modules
together with the component framework.
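
On the driver side, the glue could look roughly like the sketch below. This
is again only an illustration under the assumptions above (all demo_* names
are made up): the PCI device acts as the component master, and each manually
created child is added to the component match. Each virtual child's platform
driver would then call component_add() from its probe, so the master's bind
callback runs only once all the pieces are present.

#include <linux/component.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static int demo_compare_dev(struct device *dev, void *data)
{
        /* match the exact child device recorded when it was created */
        return dev == data;
}

static int demo_master_bind(struct device *dev)
{
        /* bind all registered components; DRM device setup would go here */
        return component_bind_all(dev, NULL);
}

static void demo_master_unbind(struct device *dev)
{
        component_unbind_all(dev, NULL);
}

static const struct component_master_ops demo_master_ops = {
        .bind   = demo_master_bind,
        .unbind = demo_master_unbind,
};

static int demo_add_master(struct pci_dev *pdev,
                           struct platform_device **children, int count)
{
        struct component_match *match = NULL;
        int i;

        for (i = 0; i < count; i++)
                component_match_add(&pdev->dev, &match, demo_compare_dev,
                                    &children[i]->dev);

        /* the single PCI(e) device becomes the component master */
        return component_master_add_with_match(&pdev->dev, &demo_master_ops,
                                               match);
}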

Binding against another PCI(e) device is also possible, if it is going to
be driven by the same driver. It is just that this does not require us to
create a platform device for it manually. We won't set its parent, so the
two PCI devices are siblings then.
  

> Maxime
