Message-ID: <ZHoSlnSX0K4xeZOF@corigine.com>
Date: Fri, 2 Jun 2023 18:02:30 +0200
From: Simon Horman <simon.horman@...igine.com>
To: Saeed Mahameed <saeed@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Saeed Mahameed <saeedm@...dia.com>, netdev@...r.kernel.org,
Tariq Toukan <tariqt@...dia.com>, Mark Bloch <mbloch@...dia.com>,
Shay Drory <shayd@...dia.com>, Roi Dayan <roid@...dia.com>
Subject: Re: [net-next 03/14] net/mlx5e: rep, store send to vport rules per
peer
On Wed, May 31, 2023 at 11:01:07PM -0700, Saeed Mahameed wrote:
> From: Mark Bloch <mbloch@...dia.com>
>
> Each representor, for each send queue, is holding a
> send_to_vport rule for the peer eswitch.
>
> In order to support more than one peer, and to map between the peer
> rules and peer eswitches, refactor representor to hold both the peer
> rules and pointer to the peer eswitches.
> This enables mlx5 to store send_to_vport rules per peer, where each
> peer have dedicate index via mlx5_get_dev_index().
>
> Signed-off-by: Mark Bloch <mbloch@...dia.com>
> Signed-off-by: Shay Drory <shayd@...dia.com>
> Reviewed-by: Roi Dayan <roid@...dia.com>
> Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
...
> @@ -426,15 +437,24 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
> rep_sq->sqn = sqns_array[i];
>
> if (peer_esw) {
> + int peer_rule_idx = mlx5_get_dev_index(peer_esw->dev);
> +
> + sq_peer = kzalloc(sizeof(*sq_peer), GFP_KERNEL);
> + if (!sq_peer)
> + goto out_sq_peer_err;
Hi Mark and Saeed,

Jumping to out_sq_peer_err will return err, but err appears to be
uninitialised at this point.
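Perhaps setting err before the goto would address this; a sketch of
what I have in mind (assuming -ENOMEM is the appropriate error for the
kzalloc failure here):

			/* sketch only: report allocation failure
			 * rather than returning uninitialised err */
			sq_peer = kzalloc(sizeof(*sq_peer), GFP_KERNEL);
			if (!sq_peer) {
				err = -ENOMEM;
				goto out_sq_peer_err;
			}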
> +
> flow_rule = mlx5_eswitch_add_send_to_vport_rule(peer_esw, esw,
> rep, sqns_array[i]);
> if (IS_ERR(flow_rule)) {
> err = PTR_ERR(flow_rule);
> - mlx5_eswitch_del_send_to_vport_rule(rep_sq->send_to_vport_rule);
> - kfree(rep_sq);
> - goto out_err;
> + goto out_flow_rule_err;
> }
> - rep_sq->send_to_vport_rule_peer = flow_rule;
> +
> + sq_peer->rule = flow_rule;
> + sq_peer->peer = peer_esw;
> + err = xa_insert(&rep_sq->sq_peer, peer_rule_idx, sq_peer, GFP_KERNEL);
> + if (err)
> + goto out_xa_err;
> }
>
> list_add(&rep_sq->list, &rpriv->vport_sqs_list);
> @@ -445,6 +465,14 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
>
> return 0;
>
> +out_xa_err:
> + mlx5_eswitch_del_send_to_vport_rule(flow_rule);
> +out_flow_rule_err:
> + kfree(sq_peer);
> +out_sq_peer_err:
> + mlx5_eswitch_del_send_to_vport_rule(rep_sq->send_to_vport_rule);
> + xa_destroy(&rep_sq->sq_peer);
> + kfree(rep_sq);
> out_err:
> mlx5e_sqs2vport_stop(esw, rep);
>
...