Message-ID: <CAE=gft77xmkc6-4+h3WAp_4C7ra8XKSxcsqrVkBrYgXE0JPeSw@mail.gmail.com>
Date: Thu, 23 Jan 2020 09:55:39 -0800
From: Evan Green <evgreen@...omium.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Bjorn Helgaas <helgaas@...nel.org>,
        linux-pci <linux-pci@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Marc Zyngier <maz@...nel.org>, Christoph Hellwig <hch@....de>,
        Rajat Jain <rajatxjain@...il.com>
Subject: Re: [PATCH] PCI/MSI: Avoid torn updates to MSI pairs
On Thu, Jan 23, 2020 at 12:42 AM Thomas Gleixner <tglx@...utronix.de> wrote:
>
> Evan Green <evgreen@...omium.org> writes:
> > On Wed, Jan 22, 2020 at 3:37 PM Thomas Gleixner <tglx@...utronix.de> wrote:
> >> > One other way you could avoid torn MSI writes would be to ensure that
> >> > if you migrate IRQs across cores, you keep the same x86 vector number.
> >> > That way the address portion would be updated, and data doesn't
> >> > change, so there's no window. But that may not actually be feasible.
> >>
> >> That's not possible, simply because the x86 vector space is limited. If
> >> we had to guarantee that, we'd end up with a max of ~220 interrupts
> >> per system. Sufficient for your notebook, but the big iron people
> >> would not be amused.
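
(Quick back-of-the-envelope on that ~220, mostly for my own benefit: the
IDT has 256 entries, the first 32 are exception vectors, and a handful at
the top are system vectors (local timer, IPIs, spurious, ...), which
leaves a couple hundred vectors *per CPU*. Requiring the same vector
number on every CPU turns that per-CPU budget into the system-wide total,
hence the limit.)
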
> >
> > Right, that occurred to me as well. The actual requirement isn't quite
> > as restrictive. What you really need is the old vector to be
> > registered on both the old CPU and the new CPU. Then once the
> > interrupt is confirmed to have moved we could release the old vector
> > on both CPUs, leaving only the new vector on the new CPU.
>
> Sure, and how can you guarantee that without reserving the vector on all
> CPUs in the first place? If you don't do that, then affinity setting would
> fail every so often whenever the vector is not available, and it would
> pretty much prevent hotplug if a to-be-migrated vector is not available on
> at least one online CPU.
>
> > In that world some SMP affinity transitions might fail, which is a
> > bummer. To avoid that, you could first migrate to a vector that's
> > available on both the source and destination CPUs, keeping affinity
> > the same. Then change affinity in a separate step.
>
> Good luck with doing that at the end of the hotplug routine where the
> CPU is about to vanish.
>
> > Or alternatively, you could permanently designate a "transit" vector.
> > If an interrupt fires on this vector, then we call all ISRs currently
> > in transit between CPUs. You might end up calling ISRs that didn't
> > actually need service, but at least that's better than missing edges.
>
> I don't think we need that. While walking the dogs I thought about
> invoking a force-migrated interrupt on the target CPU, but haven't
> thought it through yet.
Yeah, I think the Intel folks did that in some tree of theirs too.
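
Something along the lines of the below is what I was imagining for the
retrigger (completely untested sketch, the helper name is made up, and it
assumes the new vector/CPU pair has already been programmed into the
device before we get here):

#include <asm/apic.h>

/*
 * Made-up helper: after the MSI message has been rewritten to point at
 * new_cpu/new_vector, fire the new vector on the new CPU once.  If the
 * device raised an interrupt inside the update window, this recovers the
 * lost edge; if it didn't, the handler runs once with nothing pending,
 * which is harmless for edge-type MSI.
 */
static void msi_retrigger_on_target(unsigned int new_cpu,
				    unsigned int new_vector)
{
	apic->send_IPI(new_cpu, new_vector);
}

The ugly part is knowing when it's safe to skip the extra IPI, which is
probably where the real complexity lives.
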
>
> >> 'lspci -vvv' and 'cat /proc/interrupts'
> >
> > Here it is:
> > https://pastebin.com/YyxBUvQ2
>
> Hrm:
>
> Capabilities: [80] MSI-X: Enable+ Count=16 Masked-
>
> So this is weird. We mask it before moving it, so the tearing issue should
> not happen on MSI-X. The tearing might just be a red herring.
Mmm... sorry what? This is the complete entry for xhci:
00:14.0 USB controller: Intel Corporation Device 02ed (prog-if 30 [XHCI])
        Subsystem: Intel Corporation Device 7270
        Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 124
        Region 0: Memory at d1200000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [70] Power Management version 2
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
                Status: D3 NoSoftRst+ PME-Enable+ DSel=0 DScale=0 PME-
        Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+
                Address: 00000000fee10004  Data: 402a
        Capabilities: [90] Vendor Specific Information: Len=14 <?>
        Kernel driver in use: xhci_hcd
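
So per the above this device is plain MSI with Maskable-, i.e. no
per-vector mask bit. To spell out the window I'm worried about, here is
roughly what __pci_write_msi_msg() boils down to for an unmaskable 64-bit
entry (simplified from memory, not the literal kernel code, with 'pos'
being the MSI capability offset):

#include <linux/pci.h>
#include <linux/msi.h>

/* Simplified sketch of the unmaskable 64-bit MSI update path. */
static void msi_write_msg_unmasked(struct pci_dev *dev, int pos,
				   const struct msi_msg *msg)
{
	/* Address low/high select the destination CPU (APIC ID)... */
	pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, msg->address_lo);
	pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, msg->address_hi);
	/*
	 * ...and the data register carries the vector.  A message the
	 * device sends between the writes above and the write below goes
	 * to the new CPU with the old vector, i.e. the torn update.  With
	 * Maskable- there is no per-vector mask bit to close that window.
	 */
	pci_write_config_dword(dev, pos + PCI_MSI_DATA_64, msg->data);
}

The masking you describe for MSI-X closes exactly that gap, which is why
the Maskable- bit above is what worries me here.
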
>
> Let me stare into the code a bit.
Thanks, I appreciate the help.
-Evan