Date:   Wed, 22 Feb 2017 10:14:53 -0500
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Bjorn Helgaas <helgaas@...nel.org>
Cc:     Juergen Gross <jgross@...e.com>, Dan Streetman <ddstreet@...e.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Stefano Stabellini <sstabellini@...nel.org>,
        Dan Streetman <dan.streetman@...onical.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        xen-devel@...ts.xenproject.org,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-pci@...r.kernel.org
Subject: Re: [PATCH] xen: do not re-use pirq number cached in pci device msi
 msg data

On 02/22/2017 09:28 AM, Bjorn Helgaas wrote:
> On Tue, Feb 21, 2017 at 10:58:39AM -0500, Boris Ostrovsky wrote:
>> On 02/21/2017 10:45 AM, Juergen Gross wrote:
>>> On 21/02/17 16:31, Dan Streetman wrote:
>>>> On Fri, Jan 13, 2017 at 5:30 PM, Konrad Rzeszutek Wilk
>>>> <konrad.wilk@...cle.com> wrote:
>>>>> On Fri, Jan 13, 2017 at 03:07:51PM -0500, Dan Streetman wrote:
>>>>>> Revert the main part of commit:
>>>>>> af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")
>>>>>>
>>>>>> That commit made the kernel read the pci device's msi message data to
>>>>>> check whether a pirq was previously configured for the device's
>>>>>> msi/msix, and, if so, re-use that pirq.  At the time, that was the
>>>>>> correct behavior.  However, a
>>>>>> later change to Qemu caused it to call into the Xen hypervisor to unmap
>>>>>> all pirqs for a pci device, when the pci device disables its MSI/MSIX
>>>>>> vectors; specifically the Qemu commit:
>>>>>> c976437c7dba9c7444fb41df45468968aaa326ad
>>>>>> ("qemu-xen: free all the pirqs for msi/msix when driver unload")
>>>>>>
>>>>>> Once Qemu added this pirq unmapping, it was no longer correct for the
>>>>>> kernel to re-use the pirq number cached in the pci device msi message
>>>>>> data.  All Qemu releases since 2.1.0 contain the patch that unmaps the
>>>>>> pirqs when the pci device disables its MSI/MSIX vectors.
>>>>>>
>>>>>> This bug is causing failures to initialize multiple NVMe controllers
>>>>>> under Xen, because the NVMe driver sets up a single MSIX vector for
>>>>>> each controller (concurrently), and then after using that to talk to
>>>>>> the controller for some configuration data, it disables the single MSIX
>>>>>> vector and re-configures all the MSIX vectors it needs.  So the MSIX
>>>>>> setup code tries to re-use the cached pirq from the first vector
>>>>>> for each controller, but the hypervisor has already given away that
>>>>>> pirq to another controller, and its initialization fails.
>>>>>>
>>>>>> This is discussed in more detail at:
>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html
>>>>>>
>>>>>> Fixes: af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")
>>>>>> Signed-off-by: Dan Streetman <dan.streetman@...onical.com>
>>>>> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
>>>> This doesn't seem to be applied yet; is it still waiting on another
>>>> ack?  Or maybe I'm looking at the wrong git tree...
>>> Am I wrong or shouldn't this go through the PCI tree? Konrad?
>> Konrad is away this week, but since the pull request for the Xen tree
>> just went out, we should probably wait until rc1 anyway (unless something
>> big comes up before that).
> I assume this should go via the Xen or x86 tree, since that's how most
> arch/x86/pci/xen.c patches have been handled, including af42b8d12f8a.
> If you think otherwise, let me know.

OK, I applied it to the Xen tree's for-linus-4.11 branch.

-boris
