Message-ID: <864l7g67qj.wl-marc.zyngier@arm.com>
Date: Tue, 02 Apr 2019 11:07:16 +0100
From: Marc Zyngier <marc.zyngier@....com>
To: Heyi Guo <guoheyi@...wei.com>
Cc: <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Jason Cooper <jason@...edaemon.net>,
wanghaibin 00208455 <wanghaibin.wang@...wei.com>
Subject: Re: MSI number limit for PCI hotplug under PCI bridge on ARM platform
On Tue, 02 Apr 2019 10:30:35 +0100,
Heyi Guo <guoheyi@...wei.com> wrote:
>
>
>
> On 2019/4/2 13:00, Marc Zyngier wrote:
> > On Mon, 01 Apr 2019 14:55:52 +0100,
> > Heyi Guo <guoheyi@...wei.com> wrote:
> >> Hi folks,
> >>
> >> In the current kernel implementation for the ARM platform, all
> >> devices under one PCI bridge share the same device ID, and the
> >> total number of MSI interrupts is fixed the first time any child
> >> device allocates MSIs. However, this may cause MSI allocation to
> >> fail if the system supports device hot-plug under the PCI bridge,
> >> which is possible for an ARM virtual machine with a generic
> >> pcie-to-pci-bridge and the kernel config HOTPLUG_PCI_SHPC enabled.
> >>
> >> Does it make sense to add support for the above scenario? If it
> >> does, any suggestion for how to do that?
> > I don't think it makes much sense. You have the flexibility not to add
> > such a broken setup to your guests, and instead have enough pcie ports
> > so that you can always have an exact allocation and no DevID aliasing.
> >
> > The alternative is to dynamically grow the ITT for a given DevID,
> > which cannot be done without unmapping it first. This in turn will
> > result in interrupts being lost while the DevID was unmapped, and
> > they'd need to be pessimistically reinjected. This also involves a
> > substantial amount of data structure repainting, as you're pretty much
> > guaranteed not to be able to reuse the same LPI range.
> >
> > Given that this is arbitrarily self-inflicted, I'm not overly keen on
> > even trying to support this.
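The regrow sequence described above could be sketched roughly as follows. This is pseudocode only; none of the helper names below are real gic-v3-its driver functions, and the ordering simply mirrors the steps Marc lists (unmap, reallocate, remap, repaint, pessimistic reinjection):

```
/* Hypothetical sketch -- all helper names are made up. */
grow_itt(dev_id, new_nvecs):
    unmap_device(dev_id)              /* MAPD with V=0: old ITT discarded */
    /* From here until the remap, LPIs for dev_id are silently lost. */
    new_itt  = alloc_itt(new_nvecs)
    new_lpis = alloc_lpi_range(new_nvecs)   /* old range can't be reused */
    map_device(dev_id, new_itt)       /* MAPD with the larger ITT         */
    for each event in old_mappings:
        remap_event(dev_id, event, new_lpis[event])   /* MAPTI            */
    repaint_data_structures(old_lpis -> new_lpis)
    /* Pessimistically reinject anything that may have fired while
     * the DevID was unmapped. */
    for each event in old_mappings:
        inject(new_lpis[event])
```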
> SHPC hot-plug under a PCI bridge is attractive to us for virtual
> machines because of its larger capacity: one bridge can host up to
> 31 hot-pluggable devices, while each PCIe root port or downstream
> port can host only one.
I understand that. But the real reason why people use the PCI bridge
model instead of PCIe is because some VM models have a shortage of
ECAM config space. This is now solved in QEMU (see [1]), and you get
an extra 256MB of ECAM space. It should be enough for everybody.
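For reference, the aliasing-free topology suggested earlier (one PCIe root port per hot-pluggable device, so each endpoint keeps its own Requester ID) might look like the fragment below. This is an illustrative QEMU command line only; exact option names and the number of ports should be checked against the QEMU version in use:

```sh
# One pcie-root-port per hot-pluggable slot: no DevID aliasing,
# and each port gets an exact MSI allocation for its single device.
qemu-system-aarch64 \
    -M virt,gic-version=3 \
    -device pcie-root-port,id=rp0,bus=pcie.0,chassis=0,slot=0 \
    -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1 \
    -device virtio-net-pci,bus=rp0
```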
> Anyway, the reason for not supporting this also makes sense to me.
Yeah, the ITS was never designed for aliasing bridges. If you can use
another method to achieve the same thing, I'd rather see that.
Thanks,
M.
[1] https://events.linuxfoundation.org/wp-content/uploads/2017/12/ARM-virt-3.0-and-Beyond-Towards-a-Better-Scalability-Eric-Auger-Red-Hat.pdf
--
Jazz is not dead, it just smells funny.