Message-ID: <871qpzkj9k.ffs@tglx>
Date: Fri, 18 Nov 2022 23:08:55 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
linux-pci@...r.kernel.org, Bjorn Helgaas <bhelgaas@...gle.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Dave Jiang <dave.jiang@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Kevin Tian <kevin.tian@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Logan Gunthorpe <logang@...tatee.com>,
Ashok Raj <ashok.raj@...el.com>, Jon Mason <jdmason@...zu.us>,
Allen Hubbe <allenbh@...il.com>,
"Ahmed S. Darwish" <darwi@...utronix.de>,
Reinette Chatre <reinette.chatre@...el.com>
Subject: Re: [patch 19/33] genirq/msi: Provide msi_desc::msi_data
On Wed, Nov 16 2022 at 15:28, Jason Gunthorpe wrote:
> On Fri, Nov 11, 2022 at 02:58:41PM +0100, Thomas Gleixner wrote:
>
>> +/**
>> + * struct msi_desc_data - Generic MSI descriptor data
>> + * @iobase: Pointer to the IOMEM base address for interrupt callbacks
>> + * @cookie: Device cookie provided at allocation time
>> + *
>> + * The content of this data is implementation defined, e.g. PCI/IMS
>> + * implementations will define the meaning of the data.
>> + */
>> +struct msi_desc_data {
>> +	void __iomem		*iobase;
>> +	union msi_dev_cookie	cookie;
>> +};
>
> It would be nice to see the pci_msi_desc converted to a domain
> specific storage as well.
I looked into this and it gets ugly very fast.
The above has two parts:
iobase is domain specific and set up by the domain code
cookie is per interrupt allocation. That's where the instance
queue or whatever connects to the domain.
I can abuse the fields for PCI/MSI of course, but see below.
> Maybe could be written
>
> struct pci_msi_desc {
> }
> static_assert(sizeof(struct pci_msi_desc) <= sizeof(((struct msi_desc *)0)->domain_data));
>
> struct pci_msix_desc {
> }
> static_assert(sizeof(struct pci_msix_desc) <= sizeof(((struct msi_desc *)0)->domain_data));
>
> ideally hidden in the pci code with some irq_chip facing export API to
> snoop in the bits a few places need
I can't use that for the current combo legacy PCI/MSI code as I can't
split the irq chip implementations like I can with the new per device
domains.
And no, I'm not going to create separate code paths which do the same
thing on different data structures just to pretend that it's all shiny.
> We've used 128 bits for the PCI descriptor, we might as well let
> everyone have all 128 bits for whatever they want to do
That's fine, but there are two parts to it:
1) Domain specific data
2) Per allocation specific data
#1 is data which the domain code puts there, e.g. in the prepare_desc()
callback
#2 is data which the usage site hands in which gives the domain and the
interrupt chip the information it needs
So the data structure should look like this:
struct msi_desc_data {
	union msi_domain_cookie		dom_cookie;
	union msi_instance_cookie	ins_cookie;
};

union msi_domain_cookie {
	void __iomem	*iobase;
	void		*ptr;
	u64		value;
};

union msi_instance_cookie {
	void	*ptr;
	u64	value;
};
Sure I could make both cookies plain u64, but I hate these forced type
casts and the above is simple to handle and understand.
So you get your 128 bits, but not per instance, because that's a
nightmare to validate against the allocation code which has to copy the
data into the MSI descriptor, whatever it is (PASID, queue pointer ...).
Having two cookies makes a lot of sense because it gives a proper
separation between the domain and the usage site.
For IDXD the domain wants to store the iobase and needs a per allocation
PASID.
For your queue model, the domain wants a pointer to some device or
whatever specific things and the queue provides a pointer so that the
domain/chip can do the right thing for that particular instance.
For both sides, the domain and the allocation site, something like the
above is sufficient.
Thanks,
tglx