Message-ID: <87frnby5h3.ffs@tglx>
Date: Thu, 28 Nov 2024 12:15:20 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Jason Gunthorpe <jgg@...dia.com>, Eric Auger <eric.auger@...hat.com>
Cc: Robin Murphy <robin.murphy@....com>, Alex Williamson
<alex.williamson@...hat.com>, Nicolin Chen <nicolinc@...dia.com>,
maz@...nel.org, bhelgaas@...gle.com, leonro@...dia.com,
shameerali.kolothum.thodi@...wei.com, dlemoal@...nel.org,
kevin.tian@...el.com, smostafa@...gle.com,
andriy.shevchenko@...ux.intel.com, reinette.chatre@...el.com,
ddutile@...hat.com, yebin10@...wei.com, brauner@...nel.org,
apatel@...tanamicro.com, shivamurthy.shastri@...utronix.de,
anna-maria@...utronix.de, nipun.gupta@....com,
marek.vasut+renesas@...lbox.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH RFCv1 0/7] vfio: Allow userspace to specify the address
for each MSI vector
On Wed, Nov 20 2024 at 10:03, Jason Gunthorpe wrote:
> On Wed, Nov 20, 2024 at 02:17:46PM +0100, Eric Auger wrote:
>> > Yeah, I wasn't really suggesting to literally hook into this exact
>> > case; it was more just a general observation that if VFIO already has
>> > one justification for tinkering with pci_write_msi_msg() directly
>> > without going through the msi_domain layer, then adding another
>> > (wherever it fits best) can't be *entirely* unreasonable.
>
> I'm not sure that we can assume VFIO is the only thing touching the
> interrupt programming.
Correct.
> I think there is a KVM path, and also the /proc/ path that will change
> the MSI affinity on the fly for a VFIO-created IRQ. If the platform
> requires an MSI update to do this (i.e. encoding affinity in the
> addr/data, not using IRQ remapping HW) then we still need to ensure the
> correct MSI address is hooked in.
Yes.
>> >> Is it possible to do this with the existing write_msi_msg callback on
>> >> the msi descriptor? For instance we could simply translate the msg
>> >> address and call pci_write_msi_msg() (while avoiding an infinite
>> >> recursion). Or maybe there should be an xlate_msi_msg callback we can
>> >> register. Or I suppose there might be a way to insert an irqchip that
>> >> does the translation on write. Thanks,
>> >
>> > I'm far from keen on the idea, but if there really is an appetite for
>> > more indirection, then I guess the least-worst option would be yet
>> > another type of iommu_dma_cookie to work via the existing
>> > iommu_dma_compose_msi_msg() flow,
>
> For this direction I think I would turn iommu_dma_compose_msi_msg()
> into a function pointer stored in the iommu_domain and have
> vfio/iommufd provide its own implementation. The thing that is in
> control of the domain's translation should be providing the msi_msg.
Yes. The resulting cached message should be writeable as is.
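Roughly, and purely as a sketch (struct iommu_domain has no such
member today, and iommufd_get_user_msi_addr() is a made-up helper),
that could look like:

	struct iommu_domain {
		...
		/* Filled in by whoever owns the domain's translation */
		void (*compose_msi_msg)(struct iommu_domain *domain,
					struct msi_desc *desc,
					struct msi_msg *msg);
	};

	/* iommufd/vfio side: inject the userspace-provided MSI address
	 * for this device; the data word stays as composed by the
	 * underlying irqchip. */
	static void iommufd_compose_msi_msg(struct iommu_domain *domain,
					    struct msi_desc *desc,
					    struct msi_msg *msg)
	{
		phys_addr_t addr = iommufd_get_user_msi_addr(domain, desc);

		msg->address_hi = upper_32_bits(addr);
		msg->address_lo = lower_32_bits(addr);
	}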
>> > update per-device addresses directly. But then it's still going to
>> > need some kind of "layering violation" for VFIO to poke the IRQ layer
>> > into re-composing and re-writing a message whenever userspace feels
>> > like changing an address.
>
> I think we'd need to get into the affinity update path and force an MSI
> write as well, even if the platform isn't changing the MSI for
> affinity. Processing a vMSI entry update would then be two steps: we
> update the MSI addr in VFIO and then set the affinity.
The affinity callback of the domain/chip can return IRQ_SET_MASK_OK_DONE
which prevents recomposing and writing the message.
So you want an explicit update/write of the message, similar to what
msi_domain_activate() does.
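Sketch, modeled on what msi_domain_activate() does in kernel/irq/msi.c
(vfio_msi_rewrite_msg() is a made-up name):

	/* Recompose the message from the now-updated translation and
	 * write it out, independent of the affinity path, which may
	 * have returned IRQ_SET_MASK_OK_DONE and skipped the write. */
	static void vfio_msi_rewrite_msg(struct irq_data *irqd)
	{
		struct msi_msg msg = { };

		if (!irq_chip_compose_msi_msg(irqd, &msg))
			irq_chip_write_msi_msg(irqd, &msg);
	}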
Thanks,
tglx