Message-ID: <d3f6d627290bb1a6a1fcfdfd5fad915578453e02.camel@sipsolutions.net>
Date: Mon, 13 Feb 2023 18:54:49 +0100
From: Johannes Berg <johannes@...solutions.net>
To: Vincent Whitchurch <vincent.whitchurch@...s.com>,
Richard Weinberger <richard@....at>,
Anton Ivanov <anton.ivanov@...bridgegreys.com>
Cc: robh@...nel.org, devicetree@...ts.infradead.org,
linux-um@...ts.infradead.org, linux-kernel@...r.kernel.org,
kernel@...s.com
Subject: Re: [PATCH] virt-pci: add platform bus support
On Fri, 2023-01-27 at 15:30 +0100, Vincent Whitchurch wrote:
> This driver registers PCI busses, but the underlying virtio protocol
> could just as easily be used to provide a platform bus instead. If the
> virtio device node in the devicetree indicates that it's compatible with
> simple-bus, register platform devices instead of handling it as a PCI
> bus.
>
> Only one platform bus is allowed and the logical MMIO region for the
> platform bus is placed at an arbitrarily-chosen address away from the
> PCI region.
So ... hm. I'm not sure I _like_ this so much. It feels decidedly
strange to put it this way.
But it looks like Richard already applied it, so I suppose look at this
as kind of a retroactive information gathering. :)
> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@...s.com>
> ---
> My first approach to getting platform drivers working on UML was by
> adding a minimal PCI-to-platform bridge driver, which worked without
> modifications to virt-pci, but that got shot down:
>
> https://lore.kernel.org/lkml/20230120-simple-mfd-pci-v1-1-c46b3d6601ef@axis.com/
Reading through that ... OK that isn't fun either :-)
Sounds like there's a use case for something else though, but the PCI
IDs issue also makes that thorny.
> @@ -48,6 +51,7 @@ struct um_pci_device_reg {
>
> static struct pci_host_bridge *bridge;
> static DEFINE_MUTEX(um_pci_mtx);
> +static struct um_pci_device *um_pci_platform_device;
> static struct um_pci_device_reg um_pci_devices[MAX_DEVICES];
> static struct fwnode_handle *um_pci_fwnode;
> static struct irq_domain *um_pci_inner_domain;
> @@ -480,6 +484,9 @@ static void um_pci_handle_irq_message(struct virtqueue *vq,
> struct virtio_device *vdev = vq->vdev;
> struct um_pci_device *dev = vdev->priv;
>
> + if (!dev->irq)
> + return;
>
Does that mean platform devices don't have interrupts, or does that mean
not all of them must have interrupts?
I'll note that this also would allow the device to send an MSI which
feels a bit wrong? But I guess it doesn't really matter.
So let me ask this: Conceptually, wouldn't the "right" way to handle
this be a new virtio device and protocol and everything, with a new
driver to handle it? I realise that would likely lead to quite a bit of
code duplication; for now I just want to understand the concept here a
bit better.
Such a driver would then define its own messages, requiring (I guess)
the equivalents of
* VIRTIO_PCIDEV_OP_INT,
* VIRTIO_PCIDEV_OP_MMIO_READ,
* VIRTIO_PCIDEV_OP_MMIO_WRITE, and
* (maybe) VIRTIO_PCIDEV_OP_MMIO_MEMSET,
but not
* VIRTIO_PCIDEV_OP_CFG_READ,
* VIRTIO_PCIDEV_OP_CFG_WRITE,
* VIRTIO_PCIDEV_OP_MSI and
* VIRTIO_PCIDEV_OP_PME,
right?
How much code would we actually duplicate? Most of virt-pci is dedicated
to the mess of PCI MSI domains, bridges, etc.
The limitation to a single device feels odd, and the fact that you have
to put it into DT under the PCI also seems odd. OTOH, the platform
device has to be _somewhere_ right?
I guess I'm not really sure how to use it, but I suppose that's OK too
:)
Thanks,
johannes