Message-ID: <AM0PR0502MB3683C922A7D87D3E1F64B93EBF7B0@AM0PR0502MB3683.eurprd05.prod.outlook.com>
Date: Tue, 26 Sep 2017 06:43:41 +0000
From: Yuval Mintz <yuvalm@...lanox.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
CC: "huangdaode@...ilicon.com" <huangdaode@...ilicon.com>,
"xuwei5@...ilicon.com" <xuwei5@...ilicon.com>,
"liguozhu@...ilicon.com" <liguozhu@...ilicon.com>,
"Yisen.Zhuang@...wei.com" <Yisen.Zhuang@...wei.com>,
"gabriele.paoloni@...wei.com" <gabriele.paoloni@...wei.com>,
"john.garry@...wei.com" <john.garry@...wei.com>,
"linuxarm@...wei.com" <linuxarm@...wei.com>,
"yisen.zhuang@...wei.com" <yisen.zhuang@...wei.com>,
"salil.mehta@...wei.com" <salil.mehta@...wei.com>,
"lipeng321@...wei.com" <lipeng321@...wei.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: RE: [PATCH v2 net-next 10/10] net: hns3: Add mqprio support when
interacting with network stack
> When using tc qdisc to configure DCB parameters, dcb_ops->setup_tc
> is used to tell the hclge_dcb module to do the setup.
While this might be a step in the right direction, it causes an inconsistency
in user experience - some [well, most] vendors don't allow the mqprio
priority mapping to affect DCB, relying instead on the dcbnl functionality
to control that configuration.
A couple of options to consider:
- Perhaps this logic shouldn't be contained inside the driver but rather
in the mqprio logic itself, i.e., rely on the DCBNL functionality [if available]
from within mqprio and try changing the configuration there.
- Add a new TC_MQPRIO_HW_OFFLOAD_ value to explicitly reflect a user
request to allow this configuration to affect DCB [see the sketch below].
> When using lldptool to configure DCB parameters, the hclge_dcb module
> calls client_ops->setup_tc to tell the network stack which queues
> and priorities are used for a specific tc.
You're basically bypassing the mqprio logic.
Since you're configuring the prio->queue mapping from the DCB flow,
you'll get mqprio-like behavior [meaning a transmitted packet
would reach a transmission queue associated with its priority] even
if the device wasn't granted an mqprio qdisc.
Why should your user even use mqprio? What benefit does he get from it?
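For reference, this is roughly what I mean by getting mqprio-like behavior
straight from the DCB flow - a sketch only, using the kinfo fields visible
in the patch below; the prio_tc[] array and the wrapper name are assumptions
on my side:

	static void hns3_dcb_update_tc_queue_map(struct net_device *ndev,
						 struct hnae3_knic_private_info *kinfo)
	{
		u8 tc, prio;

		netdev_set_num_tc(ndev, kinfo->num_tc);

		/* each TC owns rss_size contiguous Tx queues */
		for (tc = 0; tc < kinfo->num_tc; tc++)
			netdev_set_tc_queue(ndev, tc, kinfo->rss_size,
					    tc * kinfo->rss_size);

		/* prio->tc map taken from the DCB/ETS configuration */
		for (prio = 0; prio < 8; prio++)
			netdev_set_prio_tc_map(ndev, prio, kinfo->prio_tc[prio]);
	}

Once something like this runs, the core's queue selection already maps
skb->priority to the right queue range with no mqprio qdisc involved -
hence the question.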
...
> +static int hns3_nic_set_real_num_queue(struct net_device *netdev)
> +{
> +	struct hns3_nic_priv *priv = netdev_priv(netdev);
> +	struct hnae3_handle *h = priv->ae_handle;
> +	struct hnae3_knic_private_info *kinfo = &h->kinfo;
> +	unsigned int queue_size = kinfo->rss_size * kinfo->num_tc;
> +	int ret;
> +
> +	ret = netif_set_real_num_tx_queues(netdev, queue_size);
> +	if (ret) {
> +		netdev_err(netdev,
> +			   "netif_set_real_num_tx_queues fail, ret=%d!\n",
> +			   ret);
> +		return ret;
> +	}
> +
> +	ret = netif_set_real_num_rx_queues(netdev, queue_size);
I don't think you're changing the driver behavior here, but why are you setting
the real number of Rx queues based on the number of TCs?
Do you actually open (TC x RSS) Rx queues?
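In other words [just to illustrate the question, not a suggested fix], if the
hardware only opens rss_size Rx queues, the Rx side would normally be sized
independently of num_tc, something like:

	ret = netif_set_real_num_tx_queues(netdev, kinfo->rss_size * kinfo->num_tc);
	if (ret)
		return ret;

	/* Rx is not split per TC, so only rss_size queues are exposed */
	ret = netif_set_real_num_rx_queues(netdev, kinfo->rss_size);
	if (ret)
		return ret;

If the device really does open (TC x RSS) Rx queues, then the patch is fine
as-is and the question is answered.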