Message-Id: <20230928-mlx5_init_fix-v1-1-79749d45ce60@linux.ibm.com>
Date: Thu, 28 Sep 2023 15:55:47 +0200
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>,
Jason Gunthorpe <jgg@...pe.ca>,
Matthew Rosato <mjrosato@...ux.ibm.com>,
Joerg Roedel <joro@...tes.org>,
Robin Murphy <robin.murphy@....com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Shay Drory <shayd@...dia.com>,
Moshe Shemesh <moshe@...dia.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: linux-s390@...r.kernel.org, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
Niklas Schnelle <schnelle@...ux.ibm.com>
Subject: [PATCH net] net/mlx5: fix calling mlx5_cmd_init() before DMA mask
is set

Since commit 06cd555f73ca ("net/mlx5: split mlx5_cmd_init() to probe and
reload routines") mlx5_cmd_init() is called from mlx5_mdev_init(), which
probe_one() invokes before mlx5_pci_init(). This is a problem because
mlx5_pci_init() is where the DMA and coherent masks are set, while
mlx5_cmd_init() already performs a dma_alloc_coherent(). Thus a DMA
allocation is made during probe before the correct masks are set. Once
s390x is converted to the common dma-iommu code this causes probe to
fail while initializing the cmdif SW structs, because on current s390x
machines DMA addresses below 4 GiB are reserved and, unlike the old
s390x-specific DMA API implementation, the common code enforces DMA
masks.

Fix this by swapping the order of the mlx5_mdev_init() and
mlx5_pci_init() calls in probe_one().
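
To illustrate the DMA API constraint (a minimal sketch, not the actual
mlx5 code; "size" and "dma_handle" are placeholders):

	/* The device starts out with the default 32-bit DMA masks. On
	 * s390x with the common dma-iommu code no DMA addresses below
	 * 4 GiB exist, so an allocation made under the default mask
	 * cannot be satisfied and returns NULL.
	 */
	buf = dma_alloc_coherent(&pdev->dev, size, &dma_handle, GFP_KERNEL);

	/* Correct order: widen the streaming and coherent masks first
	 * (mlx5_pci_init() does this), then allocate.
	 */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
		return -EIO;
	buf = dma_alloc_coherent(&pdev->dev, size, &dma_handle, GFP_KERNEL);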
Link: https://lore.kernel.org/linux-iommu/cfc9e9128ed5571d2e36421e347301057662a09e.camel@linux.ibm.com/
Fixes: 06cd555f73ca ("net/mlx5: split mlx5_cmd_init() to probe and reload routines")
Signed-off-by: Niklas Schnelle <schnelle@...ux.ibm.com>
---
Note: I ran into this while testing the linked series that converts
s390x to use dma-iommu. The existing s390x-specific DMA API
implementation doesn't respect DMA masks and is thus not affected, even
though it, too, only supports DMA addresses above 4 GiB. That said,
ConnectX VFs are the primary users of native PCI on s390x and we'd
really like to get the DMA API conversion into v6.7, so this has high
priority for us.
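
In simplified form (error unwinding and unrelated steps elided) the
probe sequence after this patch is:

	static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		...
		err = mlx5_pci_init(dev, pdev, id);	/* sets DMA/coherent masks */
		...
		err = mlx5_mdev_init(dev, prof_sel);	/* mlx5_cmd_init() ->
							 * dma_alloc_coherent() now
							 * runs with correct masks */
		...
		err = mlx5_init_one(dev);
		...
	}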
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 15561965d2af..06744dedd928 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1908,10 +1908,6 @@ static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto adev_init_err;
 	}
 
-	err = mlx5_mdev_init(dev, prof_sel);
-	if (err)
-		goto mdev_init_err;
-
 	err = mlx5_pci_init(dev, pdev, id);
 	if (err) {
 		mlx5_core_err(dev, "mlx5_pci_init failed with error code %d\n",
@@ -1919,6 +1915,10 @@ static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto pci_init_err;
 	}
 
+	err = mlx5_mdev_init(dev, prof_sel);
+	if (err)
+		goto mdev_init_err;
+
 	err = mlx5_init_one(dev);
 	if (err) {
 		mlx5_core_err(dev, "mlx5_init_one failed with error code %d\n",
@@ -1939,10 +1939,10 @@ static int probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	return 0;
 
 err_init_one:
-	mlx5_pci_close(dev);
-pci_init_err:
 	mlx5_mdev_uninit(dev);
 mdev_init_err:
+	mlx5_pci_close(dev);
+pci_init_err:
 	mlx5_adev_idx_free(dev->priv.adev_idx);
 adev_init_err:
 	mlx5_devlink_free(devlink);
---
base-commit: 6465e260f48790807eef06b583b38ca9789b6072
change-id: 20230928-mlx5_init_fix-c465b5cda327
Best regards,
--
Niklas Schnelle
Linux on Z Development
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Gregor Pillen
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294
IBM Data Privacy Statement - https://www.ibm.com/privacy