Message-ID: <20140801143922.GB5406@pd.tnic>
Date: Fri, 1 Aug 2014 16:39:22 +0200
From: Borislav Petkov <bp@...en8.de>
To: Jiang Liu <jiang.liu@...ux.intel.com>
Cc: "Rafael J . Wysocki" <rjw@...ysocki.net>,
Thomas Gleixner <tglx@...utronix.de>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-pci@...r.kernel.org, lkml <linux-kernel@...r.kernel.org>,
Jörg Rödel <joro@...tes.org>
Subject: Re: [PATCH] x86, irq: Keep IRQ assignment for PCI devices during
suspend/hibernation, bisected
On Fri, Aug 01, 2014 at 08:27:23PM +0800, Jiang Liu wrote:
> The above commit may cause failure of suspend/hibernation.
> The reason is:
> 1) With recent changes, we dynamically allocate irq numbers for IOAPIC
> pins.
> 2) The allocated irq will be released when suspending/hibernating.
> pci_disable_device()->pcibios_disable_irq()
> 3) When resuming, a different irq may be assigned to the PCI device.
> pci_enable_device()->pcibios_enable_irq()
> 4) Now the hardware will deliver interrupt to the new allocated irq
> but the interrupt handler is still registered on old irq, so it
> breaks the driver and causes failure of hibernation.
>
> The patch I sent out fixes this issue by keeping the allocated irq
> across suspend/hibernation, and it seems to work as expected.
>
> But I still don't know why it causes IOMMU related warnings as:
> AMD-Vi: Event logged [IO_PAGE_FAULT device=00:12.0 domain=0x0009
> address=0x0000000000000000 flags=0x0000]
Well, the devices in those warnings change during my test runs. Once it
was 00:13.0, once this one, once the GPU. This box has an IOMMU, so it
might be related to that somehow.
I could try to disable the IOMMU and see whether it still triggers. That
could tell us something.
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/