Message-ID: <de3e14d355f42ed2322483bc1a3448ace46fd6fb.camel@perches.com>
Date: Thu, 25 Jun 2020 20:06:11 -0700
From: Joe Perches <joe@...ches.com>
To: Jeff Kirsher <jeffrey.t.kirsher@...el.com>, davem@...emloft.net
Cc: Alice Michael <alice.michael@...el.com>, netdev@...r.kernel.org,
nhorman@...hat.com, sassmann@...hat.com,
Alan Brady <alan.brady@...el.com>,
Phani Burra <phani.r.burra@...el.com>,
Joshua Hay <joshua.a.hay@...el.com>,
Madhu Chittim <madhu.chittim@...el.com>,
Pavan Kumar Linga <pavan.kumar.linga@...el.com>,
Donald Skidmore <donald.c.skidmore@...el.com>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>
Subject: Re: [net-next v3 07/15] iecm: Implement virtchnl commands
On Thu, 2020-06-25 at 19:07 -0700, Jeff Kirsher wrote:
> From: Alice Michael <alice.michael@...el.com>
>
> Implement various virtchnl commands that enable
> communication with hardware.
[]
> diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c
[]
> @@ -751,7 +1422,44 @@ iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q,
> enum iecm_status
> iecm_send_get_stats_msg(struct iecm_vport *vport)
> {
> - /* stub */
> + struct iecm_adapter *adapter = vport->adapter;
> + struct virtchnl_queue_select vqs;
> + enum iecm_status err;
> +
> + /* Don't send get_stats message if one is pending or the
> + * link is down
> + */
> + if (test_bit(IECM_VC_GET_STATS, adapter->vc_state) ||
> + adapter->state <= __IECM_DOWN)
> + return 0;
> +
> + vqs.vsi_id = vport->vport_id;
> +
> + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_STATS,
> + sizeof(vqs), (u8 *)&vqs);
Rather clearer to just test err and return it immediately:

	if (err)
		return err;
> +
> + if (!err)
> + err = iecm_wait_for_event(adapter, IECM_VC_GET_STATS,
> + IECM_VC_GET_STATS_ERR);
Unindent and add

	if (err)
		return err;

so all the code below is unindented as well.

It might also be clearer to use another temporary for
vport->netstats.
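Perhaps something like this (untested sketch; it assumes
vport->netstats is a struct rtnl_link_stats64, which isn't visible
in this hunk):

	struct iecm_adapter *adapter = vport->adapter;
	struct virtchnl_eth_stats *stats;
	/* assumes netstats is a struct rtnl_link_stats64 */
	struct rtnl_link_stats64 *netstats = &vport->netstats;
	struct virtchnl_queue_select vqs;
	enum iecm_status err;

	/* Don't send get_stats message if one is pending or the
	 * link is down
	 */
	if (test_bit(IECM_VC_GET_STATS, adapter->vc_state) ||
	    adapter->state <= __IECM_DOWN)
		return 0;

	vqs.vsi_id = vport->vport_id;

	err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_STATS,
			       sizeof(vqs), (u8 *)&vqs);
	if (err)
		return err;

	err = iecm_wait_for_event(adapter, IECM_VC_GET_STATS,
				  IECM_VC_GET_STATS_ERR);
	if (err)
		return err;

	stats = (struct virtchnl_eth_stats *)adapter->vc_msg;
	netstats->rx_packets = stats->rx_unicast + stats->rx_multicast +
			       stats->rx_broadcast;
	netstats->tx_packets = stats->tx_unicast + stats->tx_multicast +
			       stats->tx_broadcast;
	netstats->rx_bytes = stats->rx_bytes;
	netstats->tx_bytes = stats->tx_bytes;
	netstats->tx_errors = stats->tx_errors;
	netstats->rx_dropped = stats->rx_discards;
	netstats->tx_dropped = stats->tx_discards;

	mutex_unlock(&adapter->vc_msg_lock);

	return 0;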
> +
> + if (!err) {
> + struct virtchnl_eth_stats *stats =
> + (struct virtchnl_eth_stats *)adapter->vc_msg;
> + vport->netstats.rx_packets = stats->rx_unicast +
> + stats->rx_multicast +
> + stats->rx_broadcast;
> + vport->netstats.tx_packets = stats->tx_unicast +
> + stats->tx_multicast +
> + stats->tx_broadcast;
> + vport->netstats.rx_bytes = stats->rx_bytes;
> + vport->netstats.tx_bytes = stats->tx_bytes;
> + vport->netstats.tx_errors = stats->tx_errors;
> + vport->netstats.rx_dropped = stats->rx_discards;
> + vport->netstats.tx_dropped = stats->tx_discards;
> + mutex_unlock(&adapter->vc_msg_lock);
> + }
> +
> + return err;
> }
[]
> @@ -801,7 +1670,24 @@ iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get)
> */
> enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport)
> {
> - /* stub */
> + struct iecm_rx_ptype_decoded *rx_ptype_lkup = vport->rx_ptype_lkup;
> + int ptype_list[IECM_RX_SUPP_PTYPE] = { 0, 1, 11, 12, 22, 23, 24, 25, 26,
> + 27, 28, 88, 89, 90, 91, 92, 93,
> + 94 };
static const?
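i.e. something like this (same values, just made static const so the
table isn't rebuilt on the stack on every call):

	static const int ptype_list[IECM_RX_SUPP_PTYPE] = {
		0, 1, 11, 12, 22, 23, 24, 25, 26, 27, 28,
		88, 89, 90, 91, 92, 93, 94
	};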
> + enum iecm_status err = 0;
> + int i;
> +
> + for (i = 0; i < IECM_RX_MAX_PTYPE; i++)
> + rx_ptype_lkup[i] = iecm_rx_ptype_lkup[0];
> +
> + for (i = 0; i < IECM_RX_SUPP_PTYPE; i++) {
> + int j = ptype_list[i];
> +
> + rx_ptype_lkup[j] = iecm_rx_ptype_lkup[i];
> + rx_ptype_lkup[j].ptype = ptype_list[i];
> +	}
> +
> + return err;
> }