Message-ID: <20251029-support-other-eswitch-v1-7-98bb707b5d57@nvidia.com>
Date: Wed, 29 Oct 2025 17:42:59 +0200
From: Edward Srouji <edwards@...dia.com>
To: Leon Romanovsky <leon@...nel.org>, Saeed Mahameed <saeedm@...dia.com>,
Tariq Toukan <tariqt@...dia.com>, Mark Bloch <mbloch@...dia.com>, Andrew Lunn
<andrew+netdev@...n.ch>, "David S . Miller" <davem@...emloft.net>, "Eric
Dumazet" <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>, Jason Gunthorpe <jgg@...pe.ca>
CC: <netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, Patrisious Haddad <phaddad@...dia.com>, "Leon
Romanovsky" <leonro@...dia.com>, Edward Srouji <edwards@...dia.com>
Subject: [PATCH rdma-next 7/7] RDMA/mlx5: Add other eswitch support to userspace tables
From: Patrisious Haddad <phaddad@...dia.com>

Allow the creation of RDMA TRANSPORT tables over VFs/SFs that belong
to another eswitch manager. This is only possible for PFs that were
connected via a create_lag PRM command.

Signed-off-by: Patrisious Haddad <phaddad@...dia.com>
Signed-off-by: Leon Romanovsky <leonro@...dia.com>
Signed-off-by: Edward Srouji <edwards@...dia.com>
---
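Note for reviewers: below is a condensed sketch of what
mlx5_ib_fill_transport_ns_info() looks like with this patch applied.
The elided switchdev/capability/rep checks and the vport fill-in are
assumed from the existing fs.c; this is an illustration, not a
substitute for the diff itself.

static int mlx5_ib_fill_transport_ns_info(struct mlx5_ib_dev *dev,
					  enum mlx5_flow_namespace_type ns_type,
					  u32 *flags, u16 *vport_idx,
					  u16 *vport,
					  struct mlx5_core_dev **ft_mdev,
					  u32 ib_port, u16 *esw_owner_vhca_id)
{
	struct mlx5_core_dev *esw_mdev;

	/* ... existing switchdev-mode, capability and rep checks ... */

	esw_mdev = mlx5_eswitch_get_core_dev(dev->port[ib_port - 1].rep->esw);
	if (esw_mdev != dev->mdev) {
		/*
		 * The vport belongs to another eswitch manager; allowed
		 * only when the local device can manage RDMA transport
		 * tables on the other eswitch (PFs connected via the
		 * create_lag PRM command).
		 */
		if (!MLX5_CAP_ADV_RDMA(dev->mdev,
				       rdma_transport_manager_other_eswitch))
			return -EOPNOTSUPP;
		*flags |= MLX5_FLOW_TABLE_OTHER_ESWITCH;
		/* Record which vhca owns the eswitch of this vport. */
		*esw_owner_vhca_id = MLX5_CAP_GEN(esw_mdev, vhca_id);
	}

	*flags |= MLX5_FLOW_TABLE_OTHER_VPORT;
	*ft_mdev = esw_mdev;

	/* ... existing vport/vport_idx fill-in ... */
	return 0;
}

_get_flow_table() then forwards the value through
ft_attr.esw_owner_vhca_id so fs_core can address the owning eswitch
when creating the table.
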
 drivers/infiniband/hw/mlx5/fs.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
index c8a25370aa79..d17823ce7f38 100644
--- a/drivers/infiniband/hw/mlx5/fs.c
+++ b/drivers/infiniband/hw/mlx5/fs.c
@@ -1874,7 +1874,7 @@ static int mlx5_ib_fill_transport_ns_info(struct mlx5_ib_dev *dev,
 					  u32 *flags, u16 *vport_idx,
 					  u16 *vport,
 					  struct mlx5_core_dev **ft_mdev,
-					  u32 ib_port)
+					  u32 ib_port, u16 *esw_owner_vhca_id)
 {
 	struct mlx5_core_dev *esw_mdev;
 
@@ -1888,8 +1888,13 @@ static int mlx5_ib_fill_transport_ns_info(struct mlx5_ib_dev *dev,
 		return -EINVAL;
 
 	esw_mdev = mlx5_eswitch_get_core_dev(dev->port[ib_port - 1].rep->esw);
-	if (esw_mdev != dev->mdev)
-		return -EOPNOTSUPP;
+	if (esw_mdev != dev->mdev) {
+		if (!MLX5_CAP_ADV_RDMA(dev->mdev,
+				       rdma_transport_manager_other_eswitch))
+			return -EOPNOTSUPP;
+		*flags |= MLX5_FLOW_TABLE_OTHER_ESWITCH;
+		*esw_owner_vhca_id = MLX5_CAP_GEN(esw_mdev, vhca_id);
+	}
 
 	*flags |= MLX5_FLOW_TABLE_OTHER_VPORT;
 	*ft_mdev = esw_mdev;
@@ -1908,6 +1913,7 @@ _get_flow_table(struct mlx5_ib_dev *dev, u16 user_priority,
 	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_flow_namespace *ns = NULL;
 	struct mlx5_ib_flow_prio *prio = NULL;
+	u16 esw_owner_vhca_id = 0;
 	int max_table_size = 0;
 	u16 vport_idx = 0;
 	bool esw_encap;
@@ -1969,7 +1975,8 @@ _get_flow_table(struct mlx5_ib_dev *dev, u16 user_priority,
 			return ERR_PTR(-EINVAL);
 		ret = mlx5_ib_fill_transport_ns_info(dev, ns_type, &flags,
 						     &vport_idx, &vport,
-						     &ft_mdev, ib_port);
+						     &ft_mdev, ib_port,
+						     &esw_owner_vhca_id);
 		if (ret)
 			return ERR_PTR(ret);
 
@@ -2033,6 +2040,7 @@ _get_flow_table(struct mlx5_ib_dev *dev, u16 user_priority,
 	ft_attr.max_fte = max_table_size;
 	ft_attr.flags = flags;
 	ft_attr.vport = vport;
+	ft_attr.esw_owner_vhca_id = esw_owner_vhca_id;
 	ft_attr.autogroup.max_num_groups = MLX5_FS_MAX_TYPES;
 	return _get_prio(ns, prio, &ft_attr);
 }
--
2.47.1