Message-ID: <20240301182810.9808-1-javi.merino@kernel.org>
Date: Fri, 1 Mar 2024 18:28:10 +0000
From: javi.merino@...nel.org
To: netdev@...r.kernel.org
Cc: intel-wired-lan@...ts.osuosl.org,
Ross Lagerwall <ross.lagerwall@...rix.com>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Javi Merino <javi.merino@...nel.org>
Subject: [PATCH] ice: Fix enabling SR-IOV with Xen
From: Ross Lagerwall <ross.lagerwall@...rix.com>
When the PCI virtual functions are created, Xen is informed about them
and caches the number of MSI-X entries each one has. However, the
number of MSI-X entries is not set until after the hardware has been
configured and the VFs have been started, so Xen caches the value
before it is valid. This prevents PCI passthrough from working: Xen
refuses to map the MSI-X interrupts into domains because, as far as it
knows, those interrupts don't exist.

Fix this by moving the call to pci_enable_sriov() later so that the
number of MSI-X entries is already set correctly in hardware by the
time Xen reads it.
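
In outline, the resulting order of operations in ice_ena_vfs() is as
follows (a simplified sketch of the diff below, not the literal driver
code; error handling omitted and the comments reflect my reading of the
flow):

	ice_set_per_vf_res(pf, num_vfs);     /* size per-VF resources, incl. MSI-X */
	ice_create_vf_entries(pf, num_vfs);
	ice_start_vfs(pf);                   /* configure the hardware for each VF */
	pci_enable_sriov(pf->pdev, num_vfs); /* VFs appear; Xen now reads a valid MSI-X count */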
Cc: Jesse Brandeburg <jesse.brandeburg@...el.com>
Cc: Tony Nguyen <anthony.l.nguyen@...el.com>
Signed-off-by: Ross Lagerwall <ross.lagerwall@...rix.com>
Signed-off-by: Javi Merino <javi.merino@...nel.org>
---
I'm unsure about the error path when `pci_enable_sriov()` fails. Do we
have to undo what `ice_start_vfs()` has already started? I can see that
`ice_start_vfs()` has a teardown loop at the end for its own failure
case, so maybe the same unwinding is needed here if
`pci_enable_sriov()` fails; a rough sketch of what I have in mind
follows below.
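
For discussion only, not part of this patch: one possible unwind if
pci_enable_sriov() fails after ice_start_vfs() has succeeded. It
mirrors the per-VF teardown at the end of ice_start_vfs(); the helper
names (ice_dis_vf_mappings(), ice_vf_vsi_release()) are my assumption
of what would be reused and would need checking against the tree.

	ret = pci_enable_sriov(pf->pdev, num_vfs);
	if (ret) {
		struct ice_vf *vf;
		unsigned int bkt;

		/* undo what ice_start_vfs() set up for each VF before bailing out;
		 * vfs.table_lock is still held at this point in ice_ena_vfs()
		 */
		ice_for_each_vf(pf, bkt, vf) {
			ice_dis_vf_mappings(vf);
			ice_vf_vsi_release(vf);
		}
		goto err_unroll_vf_entries;
	}
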
drivers/net/ethernet/intel/ice/ice_sriov.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index a94a1c48c3de..8a9c8a2fe834 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -878,24 +878,20 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 	set_bit(ICE_OICR_INTR_DIS, pf->state);
 	ice_flush(hw);
 
-	ret = pci_enable_sriov(pf->pdev, num_vfs);
-	if (ret)
-		goto err_unroll_intr;
-
 	mutex_lock(&pf->vfs.table_lock);
 
 	ret = ice_set_per_vf_res(pf, num_vfs);
 	if (ret) {
 		dev_err(dev, "Not enough resources for %d VFs, err %d. Try with fewer number of VFs\n",
 			num_vfs, ret);
-		goto err_unroll_sriov;
+		goto err_unroll_intr;
 	}
 
 	ret = ice_create_vf_entries(pf, num_vfs);
 	if (ret) {
 		dev_err(dev, "Failed to allocate VF entries for %d VFs\n",
 			num_vfs);
-		goto err_unroll_sriov;
+		goto err_unroll_intr;
 	}
 
 	ice_eswitch_reserve_cp_queues(pf, num_vfs);
@@ -906,6 +902,10 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 		goto err_unroll_vf_entries;
 	}
 
+	ret = pci_enable_sriov(pf->pdev, num_vfs);
+	if (ret)
+		goto err_unroll_vf_entries;
+
 	clear_bit(ICE_VF_DIS, pf->state);
 
 	/* rearm global interrupts */
@@ -918,10 +918,8 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
 
 err_unroll_vf_entries:
 	ice_free_vf_entries(pf);
-err_unroll_sriov:
-	mutex_unlock(&pf->vfs.table_lock);
-	pci_disable_sriov(pf->pdev);
 err_unroll_intr:
+	mutex_unlock(&pf->vfs.table_lock);
 	/* rearm interrupts here */
 	ice_irq_dynamic_ena(hw, NULL, NULL);
 	clear_bit(ICE_OICR_INTR_DIS, pf->state);
--
2.43.1