Message-ID: <875zh3ukoy.fsf@nanos.tec.linutronix.de>
Date:   Thu, 23 Jan 2020 00:37:33 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Evan Green <evgreen@...omium.org>,
        Bjorn Helgaas <helgaas@...nel.org>
Cc:     linux-pci@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        Marc Zyngier <maz@...nel.org>, Christoph Hellwig <hch@....de>,
        Rajat Jain <rajatxjain@...il.com>
Subject: Re: [PATCH] PCI/MSI: Avoid torn updates to MSI pairs

Evan Green <evgreen@...omium.org> writes:
> On Wed, Jan 22, 2020 at 9:28 AM Bjorn Helgaas <helgaas@...nel.org> wrote:
>> I suspect this *is* a problem because I think disabling MSI doesn't
>> disable interrupts; it just means the device will interrupt using INTx
>> instead of MSI.  And the driver is probably not prepared to handle
>> INTx.
>>
>> PCIe r5.0, sec 7.7.1.2, seems relevant: "If MSI and MSI-X are both
>> disabled, the Function requests servicing using INTx interrupts (if
>> supported)."

Disabling MSI is not an option. Masking would be, but per-vector masking
is optional for MSI. We already attempt masking on migration, which
covers MSI-X reliably, but not all MSI incarnations.

So I assume the problem happens on an MSI interrupt, right?
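
For context on why a torn update is possible with plain MSI at all: the
message is spread across several config space registers, which are
written one at a time. Roughly what the write path does (a paraphrased
sketch along the lines of __pci_write_msi_msg() in drivers/pci/msi.c,
circa v5.5; not verbatim kernel code):

#include <linux/pci.h>
#include <linux/msi.h>

/*
 * Sketch only. Address and data are separate config space writes,
 * so a device that sends an interrupt between them can combine the
 * new address with the stale data (or vice versa) - the torn
 * update the patch in this thread is about.
 */
static void msi_write_msg_sketch(struct pci_dev *dev, int pos,
				 struct msi_desc *entry,
				 struct msi_msg *msg)
{
	pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO,
			       msg->address_lo);
	if (entry->msi_attrib.is_64) {
		pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI,
				       msg->address_hi);
		pci_write_config_word(dev, pos + PCI_MSI_DATA_64, msg->data);
	} else {
		pci_write_config_word(dev, pos + PCI_MSI_DATA_32, msg->data);
	}
}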

>> Maybe the IRQ guys have ideas about how to solve this?

Maybe :)

> But don't we already do this in __pci_restore_msi_state():
>         pci_intx_for_msi(dev, 0);
>         pci_msi_set_enable(dev, 0);
>         arch_restore_msi_irqs(dev);
>
> I'd think if there were a chance for a line-based interrupt to get in
> and wedge itself, it would already be happening there.

That's a completely different beast. It's used when resetting a device
and for other stuff like virt state migration. That's not a model for
affinity changes of a live device.

> One other way you could avoid torn MSI writes would be to ensure that
> if you migrate IRQs across cores, you keep the same x86 vector number.
> That way only the address portion would be updated and the data
> wouldn't change, so there's no window. But that may not actually be
> feasible.

That's not possible, simply because the x86 vector space is limited. If
we had to guarantee that, we'd end up with a maximum of ~220 interrupts
per system. Sufficient for your notebook, but the big iron people would
not be amused.

The real critical path here is the CPU hotplug path.

For regular migration between two online CPUs we use the 'migrate when
the irq is actually serviced' mechanism. That might have the same issue
on misdesigned devices which fire the next interrupt before the one in
flight is serviced, but I haven't seen any reports with that symptom
yet.
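
To make that mechanism concrete: the pending affinity change is applied
from the edge ack path, i.e. while the interrupt is being serviced.
Roughly, condensing the v5.5-era code in arch/x86/kernel/apic/vector.c
(paraphrased sketch, not verbatim):

/*
 * Sketch of the x86 edge ack path. A pending affinity change is
 * applied here, while the interrupt is being handled, on the
 * theory that a sane device has no second interrupt in flight at
 * this point.
 */
static void apic_ack_edge(struct irq_data *irqd)
{
	irq_complete_move(irqd_cfg(irqd));	/* finish a previous move */
	irq_move_irq(irqd);			/* apply pending affinity */
	ack_APIC_irq();				/* EOI the local APIC */
}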

But before I dig deeper into this, please provide the output of

'lspci -vvv' and 'cat /proc/interrupts'

Thanks,

        tglx

