Date:   Mon, 31 Oct 2022 06:39:24 -0500
From:   Bjorn Helgaas <helgaas@...nel.org>
To:     Nirmal Patel <nirmal.patel@...ux.intel.com>
Cc:     Jon Derrick <jonathan.derrick@...ux.dev>,
        Adrian Huang <ahuang12@...ovo.com>, linux-pci@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [bugzilla-daemon@...nel.org: [Bug 216644] New: Host OS hangs when
 enabling VMD in UEFI setup]

Thanks, Adrian, for the bisection and detailed debugging!

----- Forwarded message from bugzilla-daemon@...nel.org -----

https://bugzilla.kernel.org/show_bug.cgi?id=216644

           Summary: Host OS hangs when enabling VMD in UEFI setup
    Kernel Version: 6.1-rc2
        Regression: No

Created attachment 303108
  --> https://bugzilla.kernel.org/attachment.cgi?id=303108&action=edit
OS Log (Serial Log)

When VMD is enabled in the BIOS setup, the host OS cannot boot successfully and
logs the following error messages:

[    8.986310] vmd 0000:64:05.5: PCI host bridge to bus 10000:00
...
[    9.674113] vmd 0000:64:05.5: Bound to PCI domain 10000
...
[   33.592638] DMAR: VT-d detected Invalidation Queue Error: Reason f
[   33.592640] DMAR: VT-d detected Invalidation Time-out Error: SID ffff
[   33.599853] DMAR: VT-d detected Invalidation Completion Error: SID ffff
[   33.607339] DMAR: QI HEAD: UNKNOWN qw0 = 0x0, qw1 = 0x0
[   33.621143] DMAR: QI PRIOR: UNKNOWN qw0 = 0x0, qw1 = 0x0
[   33.627366] DMAR: Invalidation Time-out Error (ITE) cleared


*** Hardware Info ***
Platform: Skylake-D (Purley)
VMD: 8086:201d
    # lspci -s 0000:64:05.5 -nn
    0000:64:05.5 RAID bus controller [0104]: Intel Corporation Volume Management Device NVMe RAID Controller [8086:201d] (rev 04)


*** Detail Info ***
`git bisect` points to the following offending commit (6aab5622296b):

commit 6aab5622296b990024ee67dd7efa7d143e7558d0
Author: Nirmal Patel <nirmal.patel@...ux.intel.com>
Date:   Tue Nov 16 15:11:36 2021 -0700

    PCI: vmd: Clean up domain before enumeration

    During VT-d pass-through, the VMD driver occasionally fails to
    enumerate underlying NVMe devices when repetitive reboots are
    performed in the guest OS. The issue can be resolved by resetting
    VMD root ports for proper enumeration and triggering secondary bus
    reset which will also propagate reset through downstream bridges.

    Link: https://lore.kernel.org/r/20211116221136.85134-1-nirmal.patel@linux.intel.com
    Signed-off-by: Nirmal Patel <nirmal.patel@...ux.intel.com>
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
    Reviewed-by: Jon Derrick <jonathan.derrick@...ux.dev>
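
For context, the "secondary bus reset" the commit message mentions is normally
performed by pulsing the reset bit in the bridge's Bridge Control register. A
minimal sketch of that mechanism follows; it is an illustration using the
standard config accessors, not the exact vmd.c code, and 'bridge' is an
assumed struct pci_dev * for the VMD root port:

        u16 ctrl;

        /* Pulse the Secondary Bus Reset bit; everything below the
         * bridge is reset while the bit is held. Delays here are
         * illustrative, not taken from the driver. */
        pci_read_config_word(bridge, PCI_BRIDGE_CONTROL, &ctrl);
        pci_write_config_word(bridge, PCI_BRIDGE_CONTROL,
                              ctrl | PCI_BRIDGE_CTL_BUS_RESET);
        msleep(2);              /* hold reset for at least 1 ms */
        pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, ctrl);
        msleep(1000);           /* let devices settle before config access */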


*** Debugging Info ***
1. Reverting 6aab5622296b on top of 6.1-rc2 fixes the issue.

2. Commenting out the call to vmd_domain_reset() also fixes the issue, so it
looks like the memset_io() inside it is what triggers the problem.

static void vmd_domain_reset(struct vmd_dev *vmd)
{
        ...
        for (bus = 0; bus < max_buses; bus++) {
                for (dev = 0; dev < 32; dev++) {
                        ...
                        for (fn = 0; fn < functions; fn++) {
                                ...
                                /* Wipe the type-1 header from PCI_IO_BASE
                                 * (0x1c) up to PCI_ROM_ADDRESS1 (0x38): the
                                 * bridge's I/O, memory and prefetchable
                                 * window registers plus the capability
                                 * pointer. */
                                memset_io(base + PCI_IO_BASE, 0,
                                          PCI_ROM_ADDRESS1 - PCI_IO_BASE);
                        }
                }
        }
}
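
The experiment in point 2 amounts to disabling that reset at its call site,
roughly as below. This is a sketch assuming the call sits in
vmd_enable_domain() as added by 6aab5622296b, not a verbatim copy of vmd.c:

static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
        ...
        /* Point 2 experiment: skipping the config-space wipe makes the
         * DMAR errors above go away. */
        /* vmd_domain_reset(vmd); */

        /* Point 3 below: this reset path fails with -ENOTTY (-25). */
        list_for_each_entry(child, &vmd->bus->children, node)
                pci_reset_bus(child->self);
        ...
}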

3. pci_reset_bus() returns -25 (-ENOTTY) because 'slot' or 'bus->self' is NULL.
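
For reference, -25 is -ENOTTY ("Inappropriate ioctl for device"), which the
PCI reset paths commonly return when no usable reset method exists. A trivial
userspace check of the errno mapping:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                /* On Linux, ENOTTY is 25, so a kernel function returning
                 * -ENOTTY shows up as -25 in debug output. */
                printf("ENOTTY = %d (%s)\n", ENOTTY, strerror(ENOTTY));
                return 0;
        }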

4. We have 4 disks attached to VMD:
# nvme list
Node          Generic     SN                Model                      Namespace Usage                  Format       FW Rev
------------- ----------- ----------------- -------------------------- --------- ---------------------- ------------ --------
/dev/nvme3n1  /dev/ng3n1  222639A46A39      Micron_7450_MTFDKBA960TFR  1         11.48 GB / 960.20 GB   512 B + 0 B  E2MU111
/dev/nvme2n1  /dev/ng2n1  222639A46A30      Micron_7450_MTFDKBA960TFR  1          4.18 GB / 960.20 GB   512 B + 0 B  E2MU111
/dev/nvme1n1  /dev/ng1n1  BTLJ849201CE1P0I  SSDPELKX010T8L             1          1.00 TB /   1.00 TB   512 B + 0 B  VCV1LZ37
/dev/nvme0n1  /dev/ng0n1  BTLJ849201BS1P0I  SSDPELKX010T8L             1          1.00 TB /   1.00 TB   512 B + 0 B  VCV1LZ37

Any thoughts? Thanks for the help.

----- End forwarded message -----
