Message-ID: <Y81oeTZiSTOCXsoK@corigine.com>
Date: Sun, 22 Jan 2023 17:46:49 +0100
From: Simon Horman <simon.horman@...igine.com>
To: Hariprasad Kelam <hkelam@...vell.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
kuba@...nel.org, davem@...emloft.net, pabeni@...hat.com,
edumazet@...gle.com, sgoutham@...vell.com, lcherian@...vell.com,
gakula@...vell.com, jerinj@...vell.com, sbhatta@...vell.com,
jhs@...atatu.com, xiyou.wangcong@...il.com, jiri@...nulli.us,
saeedm@...dia.com, richardcochran@...il.com, tariqt@...dia.com,
linux-rdma@...r.kernel.org, maxtram95@...il.com
Subject: Re: [net-next Patch v2 2/5] octeontx2-pf: qos send queues management
On Wed, Jan 18, 2023 at 04:21:04PM +0530, Hariprasad Kelam wrote:
> From: Subbaraya Sundeep <sbhatta@...vell.com>
>
> In the current implementation the number of Send queues (SQs) is
> decided at device probe and is equal to the number of online CPUs.
> These SQs are allocated and deallocated in the interface open and
> close calls respectively.
>
> This patch defines new APIs for initializing and deinitializing Send
> queues dynamically and allocates additional transmit queues for the
> QOS feature.
>
> Signed-off-by: Subbaraya Sundeep <sbhatta@...vell.com>
> Signed-off-by: Hariprasad Kelam <hkelam@...vell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@...vell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 88f8772a61cd..0868ae825736 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -758,11 +758,16 @@ int otx2_txschq_stop(struct otx2_nic *pfvf)
> void otx2_sqb_flush(struct otx2_nic *pfvf)
> {
> int qidx, sqe_tail, sqe_head;
> + struct otx2_snd_queue *sq;
> u64 incr, *ptr, val;
> int timeout = 1000;
>
> ptr = (u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
> - for (qidx = 0; qidx < pfvf->hw.tot_tx_queues; qidx++) {
> + for (qidx = 0; qidx < pfvf->hw.tot_tx_queues + pfvf->hw.tc_tx_queues;
nit:

It seems awkward that this is essentially saying that the total
number of tx queues is 'tot_tx_queues' + 'tc_tx_queues', given that
I read 'tot' as being short for 'total'.

Also, the pfvf->hw.tot_tx_queues + pfvf->hw.tc_tx_queues pattern
is rather verbose and repeated often. Perhaps a helper would... help.
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> index c1ea60bc2630..3acda6d289d3 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
...
> @@ -1688,11 +1693,13 @@ int otx2_open(struct net_device *netdev)
>
> netif_carrier_off(netdev);
>
> - pf->qset.cq_cnt = pf->hw.rx_queues + pf->hw.tot_tx_queues;
> /* RQ and SQs are mapped to different CQs,
> * so find out max CQ IRQs (i.e CINTs) needed.
> */
> pf->hw.cint_cnt = max(pf->hw.rx_queues, pf->hw.tx_queues);
> + pf->hw.cint_cnt = max_t(u8, pf->hw.cint_cnt, pf->hw.tc_tx_queues);
nit: maybe this is nicer? *completely untested!*

	pf->hw.cint_cnt = max3(pf->hw.rx_queues, pf->hw.tx_queues,
			       pf->hw.tc_tx_queues);
...
> @@ -735,7 +741,10 @@ static void otx2_sqe_add_hdr(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
> sqe_hdr->aura = sq->aura_id;
> /* Post a CQE Tx after pkt transmission */
> sqe_hdr->pnc = 1;
> - sqe_hdr->sq = qidx;
> + if (pfvf->hw.tx_queues == qidx)
> + sqe_hdr->sq = qidx + pfvf->hw.xdp_queues;
> + else
> + sqe_hdr->sq = qidx;
nit: maybe this is nicer? *completely untested!*

	sqe_hdr->sq = pfvf->hw.tx_queues == qidx ?
		      qidx + pfvf->hw.xdp_queues : qidx;
...