Message-ID: <8efbbe8f-62f8-4711-d069-44905c841709@amd.com>
Date: Mon, 15 May 2023 18:39:21 +0530
From: Nipun Gupta <nipun.gupta@....com>
To: Thomas Gleixner <tglx@...utronix.de>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"maz@...nel.org" <maz@...nel.org>, "jgg@...pe.ca" <jgg@...pe.ca>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: "git (AMD-Xilinx)" <git@....com>,
"Anand, Harpreet" <harpreet.anand@....com>,
"Jansen Van Vuuren, Pieter" <pieter.jansen-van-vuuren@....com>,
"Agarwal, Nikhil" <nikhil.agarwal@....com>,
"Simek, Michal" <michal.simek@....com>,
"Gangurde, Abhijit" <abhijit.gangurde@....com>,
"Cascon, Pablo" <pablo.cascon@....com>
Subject: Re: [PATCH] cdx: add MSI support for CDX bus
On 5/12/2023 11:45 PM, Thomas Gleixner wrote:
>
> Nipun!
>
> On Fri, May 12 2023 at 19:50, Nipun Gupta wrote:
>> On 5/11/2023 3:59 AM, Thomas Gleixner wrote:
>>> CDX is not any different than PCI. The actual "interrupt chip" is not
>>> part of the bus, it's part of the device, and pretending that it is a
>>> bus-specific thing is just running into the same cul-de-sac sooner or
>>> later.
>>
>> I understand your viewpoint, but would state that the CDX bus is
>> somewhat different from PCI in the sense that the firmware is the
>> controller for all the devices and their configuration. The CDX bus
>> controller sends all write_msi_msg commands to the firmware running on
>> the RPU over RPMsg, and it is the firmware which interfaces with the
>> actual devices to pass this information on in a way agreed between the
>> firmware and the device. The only way to pass MSI information to a
>> device is via the firmware, and the CDX bus controller is the only
>> entity which can communicate with the firmware for this.
>
> Fair enough, but we wouldn't have had this discussion if the above
> information had been part of the changelog. See?
Yes, you are correct; we will make sure to add more information to the
changelog in the next spin.
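
For reference, the MSI write path described above is conceptually
something like the sketch below. The command layout and helper names
are illustrative placeholders, not the actual implementation; only
cdx_mcdi_rpc_async() is an existing name, and its signature is assumed
here:

	#include <linux/irq.h>
	#include <linux/msi.h>

	/* Hypothetical command relayed to the firmware over RPMsg. */
	struct cdx_msi_cmd {
		u32 dev_handle;	/* firmware handle for the device */
		u32 msi_index;	/* MSI vector within the device */
		u64 addr;	/* MSI target address */
		u32 data;	/* MSI payload */
	};

	/* Placeholder prototype; the real signature is assumed. */
	void cdx_mcdi_rpc_async(struct cdx_msi_cmd *cmd);

	static void cdx_msi_write_msg(struct irq_data *d,
				      struct msi_msg *msg)
	{
		struct cdx_msi_cmd cmd = {
			.msi_index = d->hwirq,
			.addr = ((u64)msg->address_hi << 32) |
				msg->address_lo,
			.data = msg->data,
		};

		/*
		 * The bus controller can only relay the command to the
		 * firmware on the RPU; the firmware then programs the
		 * device.
		 */
		cdx_mcdi_rpc_async(&cmd);
	}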
>
>>> IIRC, there is a gap vs. interrupt affinity setting from user space,
>>> which is irrelevant for I2C-, SPI- etc. configured interrupt chips, as
>>> they raise interrupts via an SoC interrupt pin and that's the entity
>>> which does the affinity management w/o requiring I2C/SPI. IIRC I posted
>>> a patch snippet to that effect in one of those lengthy PCI/MSI/IMS
>>> threads, because that is also required for MSI storage which happens to
>>> be in queue memory and needs to be synchronized via some command
>>> channel. But I can't be bothered to search for it as it's a no-brainer
>>> to fix that up.
>>
>> Thanks for this analysis and for pointing out the crucial hidden
>> issues with the implementation. These need to be fixed.
>>
>> As per your suggestion, we can add the firmware interaction code in
>> the irq_bus_sync_xx APIs. Another option is to change the
>> cdx_mcdi_rpc_async() API into an atomic, synchronous API.
>
> I'm not a great fan of that. Depending on how long this update takes,
> the CPU will busy-wait for it to complete with interrupts disabled and
> locks held.
Agreed. We are also inclined towards using the irq_bus_sync_xx APIs.
This would definitely solve the issues (#1 and #2) with setup_irq which
you mentioned.
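
A rough sketch of the deferral pattern we have in mind, refining the
earlier sketch (structure and helper names are again placeholders,
assuming a hypothetical synchronous firmware call
cdx_mcdi_sync_write_msi_msg()):

	#include <linux/irq.h>
	#include <linux/msi.h>
	#include <linux/mutex.h>

	/* Hypothetical per-interrupt state. */
	struct cdx_msi_ctx {
		struct mutex lock;
		struct msi_msg msg;
		bool msg_pending;
	};

	/* Placeholder prototype for the synchronous firmware call. */
	void cdx_mcdi_sync_write_msi_msg(struct cdx_msi_ctx *ctx,
					 struct msi_msg *msg);

	static void cdx_msi_write_msg(struct irq_data *d,
				      struct msi_msg *msg)
	{
		struct cdx_msi_ctx *ctx = irq_data_get_irq_chip_data(d);

		/* Atomic context: only cache the message here. */
		ctx->msg = *msg;
		ctx->msg_pending = true;
	}

	static void cdx_msi_irq_bus_lock(struct irq_data *d)
	{
		struct cdx_msi_ctx *ctx = irq_data_get_irq_chip_data(d);

		mutex_lock(&ctx->lock);
	}

	static void cdx_msi_irq_bus_sync_unlock(struct irq_data *d)
	{
		struct cdx_msi_ctx *ctx = irq_data_get_irq_chip_data(d);

		/*
		 * Sleepable context: flush the cached message to the
		 * firmware now, outside the atomic region.
		 */
		if (ctx->msg_pending) {
			cdx_mcdi_sync_write_msi_msg(ctx, &ctx->msg);
			ctx->msg_pending = false;
		}
		mutex_unlock(&ctx->lock);
	}

The core calls irq_bus_lock() before manipulating the interrupt and
irq_bus_sync_unlock() afterwards from a sleepable context, so the
firmware update happens without busy-waiting with interrupts disabled.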
For MSI affinity, since the GIC ITS always returns
IRQ_SET_MASK_OK_DONE, irq_chip_write_msi_msg() does not get called:
msi_domain_set_affinity(...)
	ret = parent->chip->irq_set_affinity(...);
	// For the GIC ITS this always returns IRQ_SET_MASK_OK_DONE
	if (ret >= 0 && ret != IRQ_SET_MASK_OK_DONE) {
		irq_chip_write_msi_msg(...);
	}
Since the CDX bus is specific to ARM and uses the GIC ITS, it seems we
do not need to do anything here. Could you please share your opinion on
this?
Thanks,
Nipun
>
>> We are evaluating both solutions and will update the implementation
>> accordingly.
>
> Thanks,
>
> tglx