Message-ID: <20240429124922.2872002-1-ross.lagerwall@citrix.com>
Date: Mon, 29 Apr 2024 13:49:22 +0100
From: Ross Lagerwall <ross.lagerwall@...rix.com>
To: netdev@...r.kernel.org
Cc: intel-wired-lan@...ts.osuosl.org,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Javi Merino <javi.merino@...nel.org>,
Ross Lagerwall <ross.lagerwall@...rix.com>
Subject: [PATCH v2] ice: Fix enabling SR-IOV with Xen
When the PCI functions are created, Xen is informed about them and
caches the number of MSI-X entries each function has. However, the
number of MSI-X entries is not set until after the hardware has been
configured and the VFs have been started. This breaks PCI passthrough
since Xen rejects mapping the MSI-X interrupts into domains because,
as far as it knows, those interrupts don't exist.

Fix this by moving the call to pci_enable_sriov() later so that the
number of MSI-X entries is set correctly in hardware by the time Xen
reads it.
Signed-off-by: Ross Lagerwall <ross.lagerwall@...rix.com>
Signed-off-by: Javi Merino <javi.merino@...nel.org>
---
In v2:
* Fix cleanup if pci_enable_sriov() fails.
drivers/net/ethernet/intel/ice/ice_sriov.c | 23 +++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index a958fcf3e6be..bc97493046a8 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -864,6 +864,8 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
int total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
+ struct ice_vf *vf;
+ unsigned int bkt;
int ret;
pf->sriov_irq_bm = bitmap_zalloc(total_vectors, GFP_KERNEL);
@@ -877,24 +879,20 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
set_bit(ICE_OICR_INTR_DIS, pf->state);
ice_flush(hw);
- ret = pci_enable_sriov(pf->pdev, num_vfs);
- if (ret)
- goto err_unroll_intr;
-
mutex_lock(&pf->vfs.table_lock);
ret = ice_set_per_vf_res(pf, num_vfs);
if (ret) {
dev_err(dev, "Not enough resources for %d VFs, err %d. Try with fewer number of VFs\n",
num_vfs, ret);
- goto err_unroll_sriov;
+ goto err_unroll_intr;
}
ret = ice_create_vf_entries(pf, num_vfs);
if (ret) {
dev_err(dev, "Failed to allocate VF entries for %d VFs\n",
num_vfs);
- goto err_unroll_sriov;
+ goto err_unroll_intr;
}
ice_eswitch_reserve_cp_queues(pf, num_vfs);
@@ -905,6 +903,10 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
goto err_unroll_vf_entries;
}
+ ret = pci_enable_sriov(pf->pdev, num_vfs);
+ if (ret)
+ goto err_unroll_start_vfs;
+
clear_bit(ICE_VF_DIS, pf->state);
/* rearm global interrupts */
@@ -915,12 +917,15 @@ static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
return 0;
+err_unroll_start_vfs:
+ ice_for_each_vf(pf, bkt, vf) {
+ ice_dis_vf_mappings(vf);
+ ice_vf_vsi_release(vf);
+ }
err_unroll_vf_entries:
ice_free_vf_entries(pf);
-err_unroll_sriov:
- mutex_unlock(&pf->vfs.table_lock);
- pci_disable_sriov(pf->pdev);
err_unroll_intr:
+ mutex_unlock(&pf->vfs.table_lock);
/* rearm interrupts here */
ice_irq_dynamic_ena(hw, NULL, NULL);
clear_bit(ICE_OICR_INTR_DIS, pf->state);
--
2.43.0