Message-ID: <1a5e5455-597f-7724-f992-32a2492c1e24@nvidia.com>
Date: Tue, 21 Apr 2020 14:08:13 +0100
From: Jon Hunter <jonathanh@...dia.com>
To: Manikanta Maddireddy <mmaddireddy@...dia.com>,
Dmitry Osipenko <digetx@...il.com>,
Thierry Reding <thierry.reding@...il.com>,
"Laxman Dewangan" <ldewangan@...dia.com>,
Wolfram Sang <wsa@...-dreams.de>,
"Vidya Sagar" <vidyas@...dia.com>
CC: <linux-i2c@...r.kernel.org>, <linux-tegra@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/2] i2c: tegra: Better handle case where CPU0 is busy
for a long time
On 21/04/2020 13:39, Manikanta Maddireddy wrote:
...
>> I am adding Manikanta to get some feedback on why we moved the PCI
>> suspend to the NOIRQ phase because it is not clear to me if we need to
>> do this here.
>>
>> Manikanta, can you comment on whether we really need to suspend Tegra
>> PCI during the noirq phase?
>
> The PCIe subsystem driver implements noirq PM callbacks; it saves and
> restores the endpoint config space in these callbacks. The PCIe controller
> should still be accessible at that point, so noirq PM callbacks are also
> implemented in the Tegra PCIe driver.
>
> file: drivers/pci/pci-driver.c
> static const struct dev_pm_ops pci_dev_pm_ops = {
>         ...
>         .suspend_noirq = pci_pm_suspend_noirq,
>         .resume_noirq = pci_pm_resume_noirq,
>         ...
> };
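For reference, a host controller driver that needs to remain functional while
those core callbacks run would hook the same phase from its own dev_pm_ops.
A minimal sketch, assuming hypothetical tegra_pcie_noirq_suspend()/resume()
handlers (illustrative only, not verbatim from pci-tegra.c):

  #include <linux/device.h>
  #include <linux/pm.h>

  /* Hypothetical handlers, for illustration only. */
  static int tegra_pcie_noirq_suspend(struct device *dev)
  {
          /* Power down the controller only after the PCI core has saved
           * endpoint config space in pci_pm_suspend_noirq() (children are
           * suspended before their parent). */
          return 0;
  }

  static int tegra_pcie_noirq_resume(struct device *dev)
  {
          /* Bring the controller back up before the PCI core restores
           * endpoint config space in pci_pm_resume_noirq(). */
          return 0;
  }

  static const struct dev_pm_ops tegra_pcie_pm_ops = {
          SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_noirq_suspend,
                                        tegra_pcie_noirq_resume)
  };
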
Thanks, however, it is still not clear why this needs to be done during
this phase. When you say "PCIe subsystem driver", which driver specifically
are you referring to? Are you referring to pci_pm_suspend_noirq() in
drivers/pci/pci-driver.c? If so, just out of curiosity, why does this need
to be handled in the noirq phase?
Thanks
Jon
--
nvpublic