Message-ID: <db1d3c1d-de04-401e-a03e-a8bc8cce639e@163.com>
Date: Thu, 11 Jul 2024 09:32:46 +0800
From: Jiwei Sun <sjiwei@....com>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: nirmal.patel@...ux.intel.com, jonathan.derrick@...ux.dev,
 paul.m.stillwell.jr@...el.com, lpieralisi@...nel.org, kw@...ux.com,
 robh@...nel.org, bhelgaas@...gle.com, linux-pci@...r.kernel.org,
 linux-kernel@...r.kernel.org, sunjw10@...ovo.com, ahuang12@...ovo.com
Subject: Re: [PATCH v3] PCI: vmd: Create domain symlink before
 pci_bus_add_devices()


On 7/11/24 06:16, Bjorn Helgaas wrote:
> [-cc Pawel, Alexey, Tomasz, which all bounced]
> 
> On Wed, Jul 10, 2024 at 09:29:25PM +0800, Jiwei Sun wrote:
>> On 7/10/24 04:59, Bjorn Helgaas wrote:
>>> [+cc Pawel, Alexey, Tomasz for mdadm history]
>>> On Wed, Jun 05, 2024 at 08:48:44PM +0800, Jiwei Sun wrote:
>>>> From: Jiwei Sun <sunjw10@...ovo.com>
>>>>
>>>> During booting into the kernel, the following error message appears:
>>>>
>>>>   (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: Unable to get real path for '/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device''
>>>>   (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: /dev/nvme1n1 is not attached to Intel(R) RAID controller.'
>>>>   (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: No OROM/EFI properties for /dev/nvme1n1'
>>>>   (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: no RAID superblock on /dev/nvme1n1.'
>>>>   (udev-worker)[2149]: nvme1n1: Process '/sbin/mdadm -I /dev/nvme1n1' failed with exit code 1.
>>>>
>>>> This symptom prevents the OS from booting successfully.
>>>
>>> I guess the root filesystem must be on a RAID device, and it's the
>>> failure to assemble that RAID device that prevents OS boot?  The
>>> messages are just details about why the assembly failed?
>>
>> Yes, you are right. In our test environment, we installed SLES15 SP6
>> on a VROC RAID 1 device set up with two NVMe drives. There is also a
>> hardware RAID kit on the motherboard with two other NVMe drives.
> 
> OK, thanks for all the details.  What would you think of updating the
> commit log like this?

Thanks, I think this commit log is clearer than before. Do I need to
send a v4 patch with these changes?

Thanks,
Jiwei

> 
>   The vmd driver creates a "domain" symlink in sysfs for each VMD bridge.
>   Previously this symlink was created after pci_bus_add_devices() added
>   devices below the VMD bridge and emitted udev events to announce them to
>   userspace.
> 
>   This led to a race between userspace consumers of the udev events and the
>   kernel creation of the symlink.  One such consumer is mdadm, which
>   assembles block devices into a RAID array, and for devices below a VMD
>   bridge, mdadm depends on the "domain" symlink.
> 
>   If mdadm loses the race, it may be unable to assemble a RAID array, which
>   may cause a boot failure or other issues, with complaints like this:
> 
>   ...
> 
>   Create the VMD "domain" symlink before invoking pci_bus_add_devices() to
>   avoid this race.
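
For readers following along, the reordering described in the proposed log
amounts to moving the sysfs_create_link() call ahead of
pci_bus_add_devices() in the vmd driver's domain-enable path. A minimal
sketch of the resulting ordering (abbreviated, not the literal diff;
function and field names follow the mainline drivers/pci/controller/vmd.c):

	static int vmd_enable_domain(struct vmd_dev *vmd,
				     unsigned long features)
	{
		/* ... bus setup and pci_scan_child_bus(vmd->bus) ... */

		/*
		 * Create the "domain" symlink before announcing the
		 * devices, so udev consumers (e.g. mdadm) can resolve
		 * .../vmd/<bdf>/domain/device as soon as they see the
		 * uevents for the child devices.
		 */
		WARN_ON(sysfs_create_link(&vmd->dev->dev.kobj,
					  &vmd->bus->dev.kobj, "domain"));

		/* Only now emit the udev events for the child devices. */
		pci_bus_add_devices(vmd->bus);
		return 0;
	}

The key invariant is that any sysfs state a udev consumer may depend on
must exist before pci_bus_add_devices() fires the uevents that trigger
that consumer.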

