Message-ID: <DM5PR18MB21347E7CD5D39D016810C62DD6750@DM5PR18MB2134.namprd18.prod.outlook.com>
Date: Thu, 28 Feb 2019 15:40:50 +0000
From: Yan Markman <ymarkman@...vell.com>
To: Antoine Tenart <antoine.tenart@...tlin.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux@...linux.org.uk" <linux@...linux.org.uk>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
"maxime.chevallier@...tlin.com" <maxime.chevallier@...tlin.com>,
"gregory.clement@...tlin.com" <gregory.clement@...tlin.com>,
"miquel.raynal@...tlin.com" <miquel.raynal@...tlin.com>,
Nadav Haklai <nadavh@...vell.com>,
Stefan Chulski <stefanc@...vell.com>,
"mw@...ihalf.com" <mw@...ihalf.com>
Subject: RE: [EXT] [PATCH net-next 07/15] net: mvpp2: fix the computation of
the RXQs
Some real and "potential" functionality was dropped in September 2018, during the
transition from a single big file to the current driver split across several files.
Regarding MVPP2_DEFAULT_RXQ:
The current variant seems flexible, permitting easy customization of the configuration according to a customer's needs.
Regarding the Queue in probe():
Looking into the old code, there were not 2 queue modes but 3:
enum mv_pp2_queue_distribution_mode {
	MVPP2_QDIST_SINGLE_MODE,
	MVPP2_QDIST_MULTI_MODE,
	MVPP2_SINGLE_RESOURCE_MODE
};
The current "if (queue_mode == MVPP2_QDIST_MULTI_MODE) ... else" logic is correct also for MVPP2_SINGLE_RESOURCE_MODE,
but the new/patched logic isn't; see the sketch below.
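A minimal stand-alone sketch of that difference (not driver code: the per-CPU
count of 4 is a placeholder, and the helpers are invented for the example; only
the branch shape mirrors the driver):

#include <stdio.h>

enum mv_pp2_queue_distribution_mode {
	MVPP2_QDIST_SINGLE_MODE,
	MVPP2_QDIST_MULTI_MODE,
	MVPP2_SINGLE_RESOURCE_MODE
};

/* Old dispatch: anything that is not MULTI takes the single-RXQ
 * branch, so MVPP2_SINGLE_RESOURCE_MODE is covered as well.
 */
static unsigned int nrxqs_old(enum mv_pp2_queue_distribution_mode m)
{
	return m == MVPP2_QDIST_MULTI_MODE ? 4 : 1;
}

/* Patched dispatch: only SINGLE takes the single-RXQ branch, so a
 * third mode would silently get the multi-queue computation.
 */
static unsigned int nrxqs_patched(enum mv_pp2_queue_distribution_mode m)
{
	return m == MVPP2_QDIST_SINGLE_MODE ? 1 : 4;
}

int main(void)
{
	printf("single-resource mode: old=%u patched=%u\n",
	       nrxqs_old(MVPP2_SINGLE_RESOURCE_MODE),
	       nrxqs_patched(MVPP2_SINGLE_RESOURCE_MODE));
	return 0;	/* prints: old=1 patched=4 */
}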
Since this patch doesn't change any functionality (right now) but reduces flexibility, I do not see a real reason to apply it.
Regards
Yan Markman
-----Original Message-----
From: Antoine Tenart <antoine.tenart@...tlin.com>
Sent: Thursday, February 28, 2019 3:21 PM
To: davem@...emloft.net; linux@...linux.org.uk
Cc: Antoine Tenart <antoine.tenart@...tlin.com>; netdev@...r.kernel.org; linux-kernel@...r.kernel.org; thomas.petazzoni@...tlin.com; maxime.chevallier@...tlin.com; gregory.clement@...tlin.com; miquel.raynal@...tlin.com; Nadav Haklai <nadavh@...vell.com>; Stefan Chulski <stefanc@...vell.com>; Yan Markman <ymarkman@...vell.com>; mw@...ihalf.com
Subject: [EXT] [PATCH net-next 07/15] net: mvpp2: fix the computation of the RXQs
The patch fixes the computation of the number of RXQs used by the PPv2 driver, which depends on the PPv2 engine version and the queue mode in use. There are three cases:
- PPv2.1: 1 RXQ per CPU.
- PPv2.2 with MVPP2_QDIST_MULTI_MODE: 1 RXQ per CPU.
- PPv2.2 with MVPP2_QDIST_SINGLE_MODE: 1 RXQ is shared between the CPUs.
The PPv2 engine supports a maximum of 32 queues per port. This patch adds a check so that we do not overstep this maximum.
It turned out that the calculation had been broken for PPv2.1 engines since commit f8c6ba8424b0, as PPv2.1 ports ended up with a single RXQ while they needed 4. This patch fixes that.
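[Editorial illustration: a small user-space sketch of the new RXQ-count
computation; the helper name nrxqs_for() and the CPU counts are invented for
the example, and the driver itself uses num_possible_cpus().]

#include <stdio.h>

#define MVPP2_PORT_MAX_RXQ	32	/* per-port limit of the PPv2 engine */

/* Round the CPU count up to a multiple of 4, then clamp to the
 * per-port maximum -- the computation this patch introduces.
 */
static unsigned int nrxqs_for(unsigned int ncpus)
{
	unsigned int nrxqs = (ncpus + 3) & ~0x3;

	return nrxqs > MVPP2_PORT_MAX_RXQ ? MVPP2_PORT_MAX_RXQ : nrxqs;
}

int main(void)
{
	/* 4 CPUs -> 4 RXQs, 6 -> 8, 40 -> clamped to 32 */
	printf("%u %u %u\n", nrxqs_for(4), nrxqs_for(6), nrxqs_for(40));
	return 0;
}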
Fixes: f8c6ba8424b0 ("net: mvpp2: use only one rx queue per port per CPU")
Signed-off-by: Antoine Tenart <antoine.tenart@...tlin.com>
---
drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 4 ++--
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 23 ++++++++++++-------
2 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 17ff330cce5f..687e011de5ef 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -549,8 +549,8 @@
 #define MVPP2_MAX_TSO_SEGS		300
 #define MVPP2_MAX_SKB_DESCS		(MVPP2_MAX_TSO_SEGS * 2 + MAX_SKB_FRAGS)
 
-/* Default number of RXQs in use */
-#define MVPP2_DEFAULT_RXQ		1
+/* Max number of RXQs per port */
+#define MVPP2_PORT_MAX_RXQ		32
 
 /* Max number of Rx descriptors */
 #define MVPP2_MAX_RXD_MAX		1024
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 24cee6cbe309..9c6200a59910 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -4062,8 +4062,8 @@ static int mvpp2_multi_queue_vectors_init(struct mvpp2_port *port,
 		snprintf(irqname, sizeof(irqname), "hif%d", i);
 
 		if (queue_mode == MVPP2_QDIST_MULTI_MODE) {
-			v->first_rxq = i * MVPP2_DEFAULT_RXQ;
-			v->nrxqs = MVPP2_DEFAULT_RXQ;
+			v->first_rxq = i;
+			v->nrxqs = 1;
 		} else if (queue_mode == MVPP2_QDIST_SINGLE_MODE &&
 			   i == (port->nqvecs - 1)) {
 			v->first_rxq = 0;
@@ -4156,8 +4156,7 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	    MVPP2_MAX_PORTS * priv->max_port_rxqs)
 		return -EINVAL;
 
-	if (port->nrxqs % MVPP2_DEFAULT_RXQ ||
-	    port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
+	if (port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
 		return -EINVAL;
 
 	/* Disable port */
@@ -4778,10 +4777,18 @@ static int mvpp2_port_probe(struct platform_device *pdev,
 	}
 
 	ntxqs = MVPP2_MAX_TXQ;
-	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_MULTI_MODE)
-		nrxqs = MVPP2_DEFAULT_RXQ * num_possible_cpus();
-	else
-		nrxqs = MVPP2_DEFAULT_RXQ;
+	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_SINGLE_MODE) {
+		nrxqs = 1;
+	} else {
+		/* According to the PPv2.2 datasheet and our experiments on
+		 * PPv2.1, RX queues have an allocation granularity of 4 (when
+		 * more than a single one on PPv2.2).
+		 * Round up to nearest multiple of 4.
+		 */
+		nrxqs = (num_possible_cpus() + 3) & ~0x3;
+		if (nrxqs > MVPP2_PORT_MAX_RXQ)
+			nrxqs = MVPP2_PORT_MAX_RXQ;
+	}
 
 	dev = alloc_etherdev_mqs(sizeof(*port), ntxqs, nrxqs);
 	if (!dev)
--
2.20.1