Message-ID: <20240710221659.GA262309@bhelgaas>
Date: Wed, 10 Jul 2024 17:16:59 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Jiwei Sun <sjiwei@....com>
Cc: nirmal.patel@...ux.intel.com, jonathan.derrick@...ux.dev,
paul.m.stillwell.jr@...el.com, lpieralisi@...nel.org, kw@...ux.com,
robh@...nel.org, bhelgaas@...gle.com, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org, sunjw10@...ovo.com,
ahuang12@...ovo.com
Subject: Re: [PATCH v3] PCI: vmd: Create domain symlink before
pci_bus_add_devices()

[-cc Pawel, Alexey, Tomasz, whose addresses all bounced]

On Wed, Jul 10, 2024 at 09:29:25PM +0800, Jiwei Sun wrote:
> On 7/10/24 04:59, Bjorn Helgaas wrote:
> > [+cc Pawel, Alexey, Tomasz for mdadm history]
> > On Wed, Jun 05, 2024 at 08:48:44PM +0800, Jiwei Sun wrote:
> >> From: Jiwei Sun <sunjw10@...ovo.com>
> >>
> >> During booting into the kernel, the following error message appears:
> >>
> >> (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: Unable to get real path for '/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device''
> >> (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: /dev/nvme1n1 is not attached to Intel(R) RAID controller.'
> >> (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: No OROM/EFI properties for /dev/nvme1n1'
> >> (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: no RAID superblock on /dev/nvme1n1.'
> >> (udev-worker)[2149]: nvme1n1: Process '/sbin/mdadm -I /dev/nvme1n1' failed with exit code 1.
> >>
> >> This symptom prevents the OS from booting successfully.
> >
> > I guess the root filesystem must be on a RAID device, and it's the
> > failure to assemble that RAID device that prevents OS boot? The
> > messages are just details about why the assembly failed?
>
> Yes, you are right. In our test environment, we installed SLES15 SP6
> on a VROC RAID 1 device that is set up with two NVMe drives. There is
> also a hardware RAID kit on the motherboard with two other NVMe
> drives.

OK, thanks for all the details. What would you think of updating the
commit log like this?

  The vmd driver creates a "domain" symlink in sysfs for each VMD bridge.
  Previously this symlink was created after pci_bus_add_devices() added
  devices below the VMD bridge and emitted udev events to announce them to
  userspace.

  This led to a race between userspace consumers of the udev events and the
  kernel creation of the symlink. One such consumer is mdadm, which
  assembles block devices into a RAID array, and for devices below a VMD
  bridge, mdadm depends on the "domain" symlink.

  If mdadm loses the race, it may be unable to assemble a RAID array, which
  may cause a boot failure or other issues, with complaints like this:

  ...

  Create the VMD "domain" symlink before invoking pci_bus_add_devices() to
  avoid this race.
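
For reference, a minimal sketch of the reordering the commit log
describes, as it would look in the VMD domain-enable path. The trimmed
struct vmd_dev and the helper name below are illustrative stand-ins, not
the driver's actual code; only sysfs_create_link() and
pci_bus_add_devices() are the real kernel interfaces involved:

  #include <linux/pci.h>
  #include <linux/sysfs.h>

  /* Minimal stand-in for the driver's private struct (the real one
   * lives in drivers/pci/controller/vmd.c). */
  struct vmd_dev {
          struct pci_dev *dev;    /* VMD endpoint device */
          struct pci_bus *bus;    /* root bus of the VMD domain */
          /* ... other fields omitted ... */
  };

  static int vmd_domain_symlink_ordering_sketch(struct vmd_dev *vmd)
  {
          int ret;

          /* Create the "domain" symlink first so it already exists when
           * pci_bus_add_devices() emits the udev events that userspace
           * consumers such as mdadm react to. */
          ret = sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
                                  "domain");
          if (ret)
                  return ret;

          /* Now add and announce the devices below the VMD bridge. */
          pci_bus_add_devices(vmd->bus);

          return 0;
  }

With the symlink in place before the udev events fire, mdadm's lookup of
the ".../domain/device" path from the error messages above can no longer
lose the race.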