Message-ID: <87mtlk84ae.ffs@tglx>
Date: Wed, 01 Dec 2021 21:21:29 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Logan Gunthorpe <logang@...tatee.com>,
LKML <linux-kernel@...r.kernel.org>,
Bjorn Helgaas <helgaas@...nel.org>,
Marc Zyngier <maz@...nel.org>,
Alex Williamson <alex.williamson@...hat.com>,
Kevin Tian <kevin.tian@...el.com>,
Megha Dey <megha.dey@...el.com>,
Ashok Raj <ashok.raj@...el.com>, linux-pci@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jon Mason <jdmason@...zu.us>,
Dave Jiang <dave.jiang@...el.com>,
Allen Hubbe <allenbh@...il.com>, linux-ntb@...glegroups.com,
linux-s390@...r.kernel.org, Heiko Carstens <hca@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>, x86@...nel.org,
Joerg Roedel <jroedel@...e.de>,
iommu@...ts.linux-foundation.org
Subject: Re: [patch 21/32] NTB/msi: Convert to msi_on_each_desc()
Jason,
On Wed, Dec 01 2021 at 14:14, Jason Gunthorpe wrote:
> On Wed, Dec 01, 2021 at 06:35:35PM +0100, Thomas Gleixner wrote:
>> On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
>> But NTB is operating through an abstraction layer and is not a direct
>> PCIe device driver.
>
> I'm not sure exactly how NTB is split between switchtec and the ntb
> code, but since the ntb code seems to be doing MMIO touches, it feels
> like part of a PCIe driver?
It's a maze of a magic PCIe driver with callbacks left and right into
the underlying NTB hardware driver, which itself does some things on its
own and also calls back into the underlying PCIe driver.
Decomposing that thing feels like being trapped in the Labyrinth of
Knossos. But let's ignore that for a moment.
>> The VFIO driver does not own the irq chip ever. The irq chip is of
>> course part of the underlying infrastructure. I never asked for that.
>
> That isn't quite what I meant. I meant the PCIe driver cannot create
> the domain or make use of the irq_chip until the VFIO layer comes
> along and provides the struct device. To me this is backwards
> layering: the interrupts come from the PCIe layer and should exist
> independently of VFIO.
See below.
>> When it allocates a slice for whatever usage then it also
>> allocates the IMS interrupts (though the VFIO people want to
>> have only one and do the allocations later on demand).
>>
>> That allocation cannot be part of the PCI/MSIx interrupt
>> domain as we already agreed on.
>
> Yes, it is just an open question of where the new irq_domain needs to
> reside.
The irqdomain is created by and part of the physical device. But that
does not mean that the interrupts which are allocated from that irq
domain are stored in the physical device representation.
Going by that logic, the PCI/MSI domain would store all MSI[X]
interrupts which are allocated on some root bridge or wherever. They are
obviously stored per PCI device, but the irqdomain is owned elsewhere,
e.g. by the underlying IOMMU.
The irqdomain is managing and handing out resources. Like any other
resource manager does.
See?
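To illustrate, here is a completely made-up sketch, not the actual API
of this series: pdev_create_ims_domain() and the container structs are
invented, while dev_set_msi_domain() and msi_domain_alloc_irqs() are the
existing generic MSI functions:

  #include <linux/device.h>
  #include <linux/irqdomain.h>
  #include <linux/msi.h>

  /* Made-up container structures, for illustration only */
  struct phys_dev {
          struct device           *dev;           /* physical device */
          struct irq_domain       *ims_domain;    /* created at probe */
  };

  struct subdev {
          struct device           dev;            /* the "subdevice" */
  };

  /*
   * Hypothetical helper standing in for whatever device specific
   * code creates the IMS irqdomain of the physical device.
   */
  struct irq_domain *pdev_create_ims_domain(struct phys_dev *pd);

  static int subdev_setup_irqs(struct phys_dev *pd, struct subdev *sd,
                               int nvec)
  {
          /*
           * Allocate from the physical device's irqdomain, but store
           * the resulting MSI descriptors in the subdevice, not in
           * the physical device:
           */
          dev_set_msi_domain(&sd->dev, pd->ims_domain);
          return msi_domain_alloc_irqs(pd->ims_domain, &sd->dev, nvec);
  }

The domain is created once and owned by the physical device, but every
consumer hands in its own struct device for the actual allocation.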
>> 1) Storage
>>
>> A) Having "subdevices" solves the storage problem nicely and
>> makes everything just fall in place. Even for a purely
>> physical multiqueue device one can argue that each queue is a
>> "subdevice" of the physical device. The fact that we lump them
>> all together today is not an argument against that.
>
> I don't like the idea that a queue is a device; that is trying to
> force a struct device centric world onto a queue which doesn't really
> want it.
Here we are at the point where we agree to disagree.
Of course a queue is a resource, but what prevents us from representing
a queue as a carved-out subdevice of the physical device?
Look at it from the VF point of view. If VFs are disabled then all
resources belong to the physical device. If VFs are enabled then the
hardware/firmware splits the resources into separate subdevices which
have their own device representation, interrupt storage etc.
As VFs have a scalability limitation due to the underlying PCIe
restrictions, the step we are talking about now is to split the queues
up in software, which means nothing other than creating a software
representation of finer-grained and more scalable subdevices.
So why would we want to pretend that these are not devices?
From a conceptual and topological view they are subdevices of the
physical device. Just because they are named queues does not make it any
different.
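To make that concrete, a minimal sketch of a queue represented as a
subdevice. All names are invented; device_initialize(), device_add() and
friends are the regular driver core functions:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/slab.h>

  /* Made-up representation of a queue carved out as a subdevice */
  struct queue_subdev {
          struct device   dev;    /* child of the physical device */
          unsigned int    qid;    /* hardware queue index */
  };

  static void queue_subdev_release(struct device *dev)
  {
          kfree(container_of(dev, struct queue_subdev, dev));
  }

  static struct queue_subdev *queue_subdev_create(struct device *parent,
                                                  unsigned int qid)
  {
          struct queue_subdev *qdev;

          qdev = kzalloc(sizeof(*qdev), GFP_KERNEL);
          if (!qdev)
                  return ERR_PTR(-ENOMEM);

          qdev->qid = qid;
          device_initialize(&qdev->dev);
          qdev->dev.parent = parent;
          qdev->dev.release = queue_subdev_release;
          dev_set_name(&qdev->dev, "%s-q%u", dev_name(parent), qid);

          if (device_add(&qdev->dev)) {
                  put_device(&qdev->dev);
                  return ERR_PTR(-ENODEV);
          }
          return qdev;
  }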
Let's look at VFIO again. If VFIO passes a VF through then it builds a
wrapper around the VF device.
So if a queue is represented as a subdevice, then VFIO can just build
a wrapper around that subdevice.
But with your model, VFIO has to create a device, request a queue,
wrestle the interrupts in place, etc. Which is exactly the opposite of
how VFs are handled.
So again, why would we want to make software-managed subdevices look
exactly the opposite of hardware/firmware-managed subdevices?
Let me also look at the cdev which is exposed by the physical device.
The cdev is nothing other than a software vehicle to create a /dev/
node. (Ignore the switchtec case, which (ab)uses the cdev to connect to
NTB.) The cdev allows user space to allocate/release a resource.
Of course it can just allocate a queue and stick the necessary
interrupts into the physical device MSI descriptor storage.
But there is no reason why the queue allocation cannot create a
subdevice representing the queue.
That makes the VFIO and the cdev case use the same underlying
representation, which can expose its properties via the underlying
device.
That in turn is consistent all over the place and does not require any
special casing for anything, neither for interrupts nor anything else.
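Something like this completely made-up sketch, building on the helpers
above (PHYSDEV_ALLOC_QUEUE, the next_qid counter and the private_data
setup are invented for illustration):

  #include <linux/fs.h>

  /* Assumes phys_dev additionally carries a next_qid counter and
   * that open() stored the phys_dev pointer in private_data. */
  static long phys_cdev_ioctl(struct file *file, unsigned int cmd,
                              unsigned long arg)
  {
          struct phys_dev *pd = file->private_data;
          struct queue_subdev *qdev;

          switch (cmd) {
          case PHYSDEV_ALLOC_QUEUE:
                  /* Same representation which VFIO would wrap */
                  qdev = queue_subdev_create(pd->dev, pd->next_qid++);
                  if (IS_ERR(qdev))
                          return PTR_ERR(qdev);
                  /*
                   * The interrupt comes from the physical device's
                   * IMS domain, but the descriptor is stored in the
                   * subdevice:
                   */
                  dev_set_msi_domain(&qdev->dev, pd->ims_domain);
                  return msi_domain_alloc_irqs(pd->ims_domain,
                                               &qdev->dev, 1);
          default:
                  return -ENOTTY;
          }
  }

VFIO would then wrap &qdev->dev exactly like it wraps a VF today, and
the cdev user gets the same thing under the hood.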
Thanks,
tglx