Message-Id: <20180821055021.418467416@linuxfoundation.org>
Date: Tue, 21 Aug 2018 08:21:00 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Jisheng Zhang <Jisheng.Zhang@...aptics.com>,
Andrew Lunn <andrew@...n.ch>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.18 34/35] net: mvneta: fix mvneta_config_rss on armada 3700
4.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jisheng Zhang <Jisheng.Zhang@...aptics.com>
[ Upstream commit 0f5c6c30a0f8c629b92ecdaef61b315c43fde10a ]
The mvneta Ethernet driver is used on a few different Marvell SoCs. Some SoCs
have per-CPU interrupts for Ethernet events; the driver uses a per-CPU napi
structure in that case. Other SoCs, such as the Armada 3700, have a single
interrupt for Ethernet events; the driver uses a single global napi structure
in that case.

Currently, mvneta_config_rss() always operates on the per-CPU napi structures.
Fix it by operating on the global napi structure in the "single interrupt"
case, and on the per-CPU napi structures otherwise.
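The resulting shape of the disable path is sketched here for clarity (it
simply mirrors the hunks below; the enable path after the rxq update is
symmetric):

	if (!pp->neta_armada3700) {
		/* per-CPU interrupts: quiesce every CPU's napi instance */
		for_each_online_cpu(cpu) {
			struct mvneta_pcpu_port *pcpu_port =
				per_cpu_ptr(pp->ports, cpu);

			napi_synchronize(&pcpu_port->napi);
			napi_disable(&pcpu_port->napi);
		}
	} else {
		/* single interrupt (Armada 3700): quiesce the global napi */
		napi_synchronize(&pp->napi);
		napi_disable(&pp->napi);
	}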
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@...aptics.com>
Fixes: 2636ac3cc2b4 ("net: mvneta: Add network support for Armada 3700 SoC")
Reviewed-by: Andrew Lunn <andrew@...n.ch>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/net/ethernet/marvell/mvneta.c | 35 +++++++++++++++++++++-------------
1 file changed, 22 insertions(+), 13 deletions(-)
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -4020,13 +4020,18 @@ static int mvneta_config_rss(struct mvn
 
 	on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
 
-	/* We have to synchronise on the napi of each CPU */
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *pcpu_port =
-			per_cpu_ptr(pp->ports, cpu);
-
-		napi_synchronize(&pcpu_port->napi);
-		napi_disable(&pcpu_port->napi);
+	if (!pp->neta_armada3700) {
+		/* We have to synchronise on the napi of each CPU */
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *pcpu_port =
+				per_cpu_ptr(pp->ports, cpu);
+
+			napi_synchronize(&pcpu_port->napi);
+			napi_disable(&pcpu_port->napi);
+		}
+	} else {
+		napi_synchronize(&pp->napi);
+		napi_disable(&pp->napi);
 	}
 
 	pp->rxq_def = pp->indir[0];
@@ -4043,12 +4048,16 @@ static int mvneta_config_rss(struct mvn
 	mvneta_percpu_elect(pp);
 	spin_unlock(&pp->lock);
 
-	/* We have to synchronise on the napi of each CPU */
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *pcpu_port =
-			per_cpu_ptr(pp->ports, cpu);
-
-		napi_enable(&pcpu_port->napi);
+	if (!pp->neta_armada3700) {
+		/* We have to synchronise on the napi of each CPU */
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *pcpu_port =
+				per_cpu_ptr(pp->ports, cpu);
+
+			napi_enable(&pcpu_port->napi);
+		}
+	} else {
+		napi_enable(&pp->napi);
 	}
 
 	netif_tx_start_all_queues(pp->dev);