Message-ID: <20240511144532.GJ2347895@kernel.org>
Date: Sat, 11 May 2024 15:45:32 +0100
From: Simon Horman <horms@...nel.org>
To: Ziwei Xiao <ziweixiao@...gle.com>
Cc: netdev@...r.kernel.org, jeroendb@...gle.com, pkaligineedi@...gle.com,
shailend@...gle.com, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, willemb@...gle.com,
hramamurthy@...gle.com, rushilg@...gle.com, jfraker@...gle.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next 5/5] gve: Add flow steering ethtool support
On Tue, May 07, 2024 at 10:59:45PM +0000, Ziwei Xiao wrote:
> From: Jeroen de Borst <jeroendb@...gle.com>
>
> Implement the ethtool commands that can be used to configure and query
> flow-steering rules. For these ethtool commands, the driver will
> temporarily drop the rtnl lock to reduce the latency of flow-steering
> commands on separate NICs. Adminq access is instead protected by the
> newly added adminq lock.
>
> A large part of this change consists of translating the ethtool
> representation of 'ntuples' to our internal gve_flow_rule and vice
> versa in the newly created gve_flow_rule.c.
>
> Considering the possibly large number of flow rules, the driver doesn't
> store all the rules locally. When the user runs 'ethtool -n <nic>' to
> check the registered rules, the driver sends an adminq command to query
> a limited number of rules/rule ids (as many as fit in a 4096-byte DMA
> buffer) at a time, caching them for the ethtool queries. The adminq
> query command is repeated until ethtool has retrieved all the rules it
> needs.
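
(For readers following along: the batched query presumably amounts to a
loop along the lines of the untested sketch below. The helper and
variable names here are hypothetical, for illustration only, and not
necessarily the driver's actual API.)

	u32 loc = 0;
	int fetched;

	/* Fetch rule batches until ethtool has all it asked for; each
	 * batch is whatever fits in one 4096-byte DMA buffer, cached in
	 * priv for the subsequent ethtool queries.
	 */
	while (loc < num_rules_needed) {
		fetched = query_flow_rules_batch(priv, loc); /* hypothetical helper */
		if (fetched < 0)
			return fetched;
		if (!fetched)
			break;	/* no more rules registered */
		loc += fetched;
	}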
>
> Signed-off-by: Jeroen de Borst <jeroendb@...gle.com>
> Co-developed-by: Ziwei Xiao <ziweixiao@...gle.com>
> Signed-off-by: Ziwei Xiao <ziweixiao@...gle.com>
> Reviewed-by: Praveen Kaligineedi <pkaligineedi@...gle.com>
> Reviewed-by: Harshitha Ramamurthy <hramamurthy@...gle.com>
> Reviewed-by: Willem de Bruijn <willemb@...gle.com>
..
> diff --git a/drivers/net/ethernet/google/gve/gve_flow_rule.c b/drivers/net/ethernet/google/gve/gve_flow_rule.c
> new file mode 100644
> index 000000000000..1cafd520f2db
> --- /dev/null
> +++ b/drivers/net/ethernet/google/gve/gve_flow_rule.c
> @@ -0,0 +1,296 @@
> +// SPDX-License-Identifier: (GPL-2.0 OR MIT)
> +/* Google virtual Ethernet (gve) driver
> + *
> + * Copyright (C) 2015-2024 Google LLC
> + */
> +
> +#include "gve.h"
> +#include "gve_adminq.h"
> +
> +static
> +int gve_fill_ethtool_flow_spec(struct ethtool_rx_flow_spec *fsp, struct gve_flow_rule *rule)
> +{
> + static const u16 flow_type_lut[] = {
> + [GVE_FLOW_TYPE_TCPV4] = TCP_V4_FLOW,
> + [GVE_FLOW_TYPE_UDPV4] = UDP_V4_FLOW,
> + [GVE_FLOW_TYPE_SCTPV4] = SCTP_V4_FLOW,
> + [GVE_FLOW_TYPE_AHV4] = AH_V4_FLOW,
> + [GVE_FLOW_TYPE_ESPV4] = ESP_V4_FLOW,
> + [GVE_FLOW_TYPE_TCPV6] = TCP_V6_FLOW,
> + [GVE_FLOW_TYPE_UDPV6] = UDP_V6_FLOW,
> + [GVE_FLOW_TYPE_SCTPV6] = SCTP_V6_FLOW,
> + [GVE_FLOW_TYPE_AHV6] = AH_V6_FLOW,
> + [GVE_FLOW_TYPE_ESPV6] = ESP_V6_FLOW,
> + };
> +
> + if (be16_to_cpu(rule->flow_type) >= ARRAY_SIZE(flow_type_lut))
The type of rule->flow_type is u16, but be16_to_cpu() expects a 16-bit
big-endian (__be16) value as its argument. This does not seem right.
This was flagged by Sparse, along with several other problems in this
patch. Please make sure patches don't introduce new Sparse warnings.
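
Perhaps something like the following (completely untested) would resolve
the mismatch, assuming the device supplies flow_type in big-endian byte
order:

	/* in the rule representation: */
	__be16 flow_type;	/* big-endian, as received from the device */

	...

	if (be16_to_cpu(rule->flow_type) >= ARRAY_SIZE(flow_type_lut))
		return -EINVAL;

Alternatively, if flow_type is already host-endian at this point, the
be16_to_cpu() conversion should simply be dropped.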
Thanks!
..
> +int gve_add_flow_rule(struct gve_priv *priv, struct ethtool_rxnfc *cmd)
> +{
> + struct ethtool_rx_flow_spec *fsp = &cmd->fs;
> + struct gve_adminq_flow_rule *rule = NULL;
> + int err;
> +
> + if (!priv->max_flow_rules)
> + return -EOPNOTSUPP;
> +
> + rule = kvzalloc(sizeof(*rule), GFP_KERNEL);
> + if (!rule)
> + return -ENOMEM;
> +
> + err = gve_generate_flow_rule(priv, fsp, rule);
> + if (err)
> + goto out;
> +
> + err = gve_adminq_add_flow_rule(priv, rule, fsp->location);
> +
> +out:
> + kfree(rule);
rule was allocated using kvzalloc(), so it should be freed using
kvfree(). Flagged by Coccinelle.
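
i.e. something like this (untested):

	-	kfree(rule);
	+	kvfree(rule);

That said, since the rule is a small fixed-size structure, allocating it
with kzalloc() and keeping kfree() may be the simpler option; kvzalloc()
is usually only warranted for allocations that can be large.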
> + if (err)
> + dev_err(&priv->pdev->dev, "Failed to add the flow rule: %u", fsp->location);
> +
> + return err;
> +}
..