Message-Id: <20230328045122.25850-3-decui@microsoft.com>
Date:   Mon, 27 Mar 2023 21:51:18 -0700
From:   Dexuan Cui <decui@...rosoft.com>
To:     bhelgaas@...gle.com, davem@...emloft.net, decui@...rosoft.com,
        edumazet@...gle.com, haiyangz@...rosoft.com, jakeo@...rosoft.com,
        kuba@...nel.org, kw@...ux.com, kys@...rosoft.com, leon@...nel.org,
        linux-pci@...r.kernel.org, lpieralisi@...nel.org,
        mikelley@...rosoft.com, pabeni@...hat.com, robh@...nel.org,
        saeedm@...dia.com, wei.liu@...nel.org, longli@...rosoft.com,
        boqun.feng@...il.com
Cc:     linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-rdma@...r.kernel.org, netdev@...r.kernel.org
Subject: [PATCH 2/6] PCI: hv: Fix a race condition in hv_irq_unmask() that can cause panic

When the host tries to remove a PCI device, it first sends a PCI_EJECT
message to the guest, and the guest is expected to gracefully remove the
PCI device and reply with a PCI_EJECTION_COMPLETE message. The host then
sends the VMBus message CHANNELMSG_RESCIND_CHANNELOFFER to the guest (by
the time the guest receives this message, the device has already been
unassigned from the guest), and the guest can do some final cleanup work.
If the guest fails to respond to the PCI_EJECT message within one minute,
the host sends CHANNELMSG_RESCIND_CHANNELOFFER anyway and removes the PCI
device forcibly.
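
To illustrate the guest-side handling, here is a simplified sketch of the
eject path in pci-hyperv.c (names follow the driver, but error handling
and logging are elided, so treat it as an approximation rather than the
exact code):

	static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
	{
		struct hv_pcibus_device *hbus = hpdev->hbus;

		/* Ignore the eject if the bus itself is already going away. */
		if (hbus->state == hv_pcibus_removing)
			return;

		/*
		 * Mark the child as ejecting; this is the state that the
		 * while loop in hv_compose_msi_msg() currently tests.
		 */
		hpdev->state = hv_pcichild_ejecting;

		/* Defer the actual (graceful) removal to a workqueue. */
		get_pcichild(hpdev);
		INIT_WORK(&hpdev->wrk, hv_eject_device_work);
		queue_work(hbus->wq, &hpdev->wrk);
	}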

In the case of fast device addition/removal, it's possible that the PCI
device driver is still configuring MSI-X interrupts when the guest
receives the PCI_EJECT message. The channel callback calls
hv_pci_eject_device(), which sets hpdev->state to hv_pcichild_ejecting
and schedules the work hv_eject_device_work(). If the PCI device driver
is meanwhile in pci_alloc_irq_vectors() -> ... -> hv_compose_msi_msg(),
the updated hpdev->state makes hv_compose_msi_msg() break out of its
while loop early and leave data->chip_data with its default value of
NULL. Later, when the PCI device driver calls request_irq() -> ... ->
hv_irq_unmask(), the guest crashes in hv_arch_irq_unmask() because
data->chip_data is NULL.
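
For context, data->chip_data is only populated after the wait loop in
hv_compose_msi_msg() completes successfully; the sketch below (a rough
approximation of the tail of that function, not the verbatim code) shows
why any early exit from the loop leaves it NULL:

	/* Wait for the host to answer the compose request. */
	while (!try_wait_for_completion(&comp.comp_pkt.host_event)) {
		/*
		 * ... bail-out checks, including the hpdev->state test
		 * removed by this patch, jump out of the function here ...
		 */
		udelay(100);
	}

	/* Only reached once the host has replied: */
	int_desc->vector = comp.int_desc.vector;
	int_desc->data = comp.int_desc.data;
	int_desc->address = comp.int_desc.address;
	data->chip_data = int_desc;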

Fix the issue by removing the hpdev->state test from the while loop:
when the guest receives PCI_EJECT, the device is still assigned to the
guest, and the guest has one minute to finish the device removal
gracefully, so there is no need to (and we should not) bail out of the
loop based on hpdev->state.

Fixes: de0aa7b2f97d ("PCI: hv: Fix 2 hang issues in hv_compose_msi_msg()")
Signed-off-by: Dexuan Cui <decui@...rosoft.com>

---
 drivers/pci/controller/pci-hyperv.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

With the below debug code:

@@ -643,6 +643,9 @@ static void hv_arch_irq_unmask(struct irq_data *data)
 	pbus = pdev->bus;
 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
 	int_desc = data->chip_data;
+	if (!int_desc)
+		dev_warn(&hbus->hdev->device, "%s() can not unmask irq %u\n",
+			 __func__, data->irq);

 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);

@@ -1865,6 +1868,11 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 		goto free_int_desc;
 	}

+	printk("%s: line %d: irq=%u\n", __func__, __LINE__, data->irq);
+	{
+		static bool delayed; //remove the device within the 10s.
+		if (!delayed) { delayed = true; mdelay(10000); }
+	}
 	ret = vmbus_sendpacket_getid(hpdev->hbus->hdev->channel, &ctxt.int_pkts,
 				     size, (unsigned long)&ctxt.pci_pkt,
 				     &trans_id, VM_PKT_DATA_INBAND,

I'm able to repro the below panic:

[   23.258674] hv_pci b92a0085-468b-407a-a88a-d33fac8edc75: PCI VMBus probing: Using version 0x10004
[   23.271313] hv_pci b92a0085-468b-407a-a88a-d33fac8edc75: PCI host bridge to bus 468b:00
[   23.274554] pci_bus 468b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
[   23.277733] pci_bus 468b:00: No busn resource found for root bus, will use [bus 00-ff]
[   23.283845] pci 468b:00:02.0: [15b3:1016] type 00 class 0x020000
[   23.289796] pci 468b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
[   23.296463] pci 468b:00:02.0: enabling Extended Tags
...
[   23.331300] pci_bus 468b:00: busn_res: [bus 00-ff] end is updated to 00
[   23.334130] pci 468b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
[   23.507985] mlx5_core 468b:00:02.0: no default pinctrl state
[   23.510834] mlx5_core 468b:00:02.0: enabling device (0000 -> 0002)
[   23.516843] mlx5_core 468b:00:02.0: firmware version: 14.25.8102
[   23.745069] hv_compose_msi_msg: line 1871: irq=24
[   33.685554] hv_pci b92a0085-468b-407a-a88a-d33fac8edc75: the device is being ejected
[   33.690855] hv_compose_msi_msg: line 1871: irq=25
[   33.694797] hv_compose_msi_msg: line 1871: irq=26
[   33.698884] hv_compose_msi_msg: line 1871: irq=27
[   33.702910] hv_compose_msi_msg: line 1871: irq=28
[   33.705726] hv_compose_msi_msg: line 1871: irq=29
[   33.709644] hv_compose_msi_msg: line 1871: irq=29
[   33.712182] hv_pci b92a0085-468b-407a-a88a-d33fac8edc75: hv_arch_irq_unmask() can not unmask irq 29
[   33.716625] BUG: kernel NULL pointer dereference, address: 0000000000000008
...
[   33.737426] Workqueue: events work_for_cpu_fn
[   33.739562] RIP: 0010:hv_irq_unmask+0xc2/0x400 [pci_hyperv]
...
[   33.778511] Call Trace:
[   33.779533]  <TASK>
[   33.780428]  unmask_irq.part.0+0x23/0x40
[   33.781994]  irq_enable+0x60/0x70
[   33.783336]  __irq_startup+0x5b/0x80
[   33.784772]  irq_startup+0x75/0x140
[   33.786175]  __setup_irq+0x3ae/0x840
[   33.787586]  request_threaded_irq+0x112/0x180
[   33.789298]  mlx5_irq_alloc+0x111/0x310 [mlx5_core]
[   33.791464]  irq_pool_request_vector+0x72/0x80 [mlx5_core]
[   33.794449]  mlx5_ctrl_irq_request+0xc9/0x160 [mlx5_core]
[   33.797454]  mlx5_eq_table_create+0x9e/0xb30 [mlx5_core]
[   33.802127]  mlx5_load+0x54/0x3b0 [mlx5_core]
[   33.804157]  mlx5_init_one+0x1e6/0x550 [mlx5_core]
[   33.806347]  probe_one+0x2e5/0x460 [mlx5_core]
[   33.808664]  local_pci_probe+0x4b/0xb0
[   33.810377]  work_for_cpu_fn+0x1a/0x30
[   33.812275]  process_one_work+0x21f/0x430
[   33.814700]  worker_thread+0x1fa/0x3c0

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index b82c7cde19e6..1b11cf739193 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -643,6 +643,11 @@ static void hv_arch_irq_unmask(struct irq_data *data)
 	pbus = pdev->bus;
 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
 	int_desc = data->chip_data;
+	if (!int_desc) {
+		dev_warn(&hbus->hdev->device, "%s() can not unmask irq %u\n",
+			 __func__, data->irq);
+		return;
+	}
 
 	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
 
@@ -1911,12 +1916,6 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 		hv_pci_onchannelcallback(hbus);
 		spin_unlock_irqrestore(&channel->sched_lock, flags);
 
-		if (hpdev->state == hv_pcichild_ejecting) {
-			dev_err_once(&hbus->hdev->device,
-				     "the device is being ejected\n");
-			goto enable_tasklet;
-		}
-
 		udelay(100);
 	}
 
-- 
2.25.1
