Message-ID: <87y4ea7stn.fsf@free-electrons.com>
Date: Fri, 06 Nov 2015 21:53:40 +0100
From: Gregory CLEMENT <gregory.clement@...e-electrons.com>
To: Marcin Wojtas <mw@...ihalf.com>
Cc: "David S. Miller" <davem@...emloft.net>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
Jason Cooper <jason@...edaemon.net>,
Andrew Lunn <andrew@...n.ch>,
Sebastian Hesselbarth <sebastian.hesselbarth@...il.com>,
Ezequiel Garcia <ezequiel.garcia@...e-electrons.com>,
"linux-arm-kernel\@lists.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Lior Amsalem <alior@...vell.com>,
Nadav Haklai <nadavh@...vell.com>,
Simon Guinot <simon.guinot@...uanux.org>,
Maxime Ripard <maxime.ripard@...e-electrons.com>,
Boris BREZILLON <boris.brezillon@...e-electrons.com>,
Russell King - ARM Linux <linux@....linux.org.uk>,
Willy Tarreau <w@....eu>
Subject: Re: [RFC PATCH 2/2] net: mvneta: Add naive RSS support

Hi Marcin,
[...]
>> +static int mvneta_config_rss(struct mvneta_port *pp)
>> +{
>> +	int cpu;
>> +	u32 val;
>> +
>> +	netif_tx_stop_all_queues(pp->dev);
>> +
>> +	/* Mask all ethernet port interrupts */
>> +	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
>
> Shouldn't the interrupts be masked on each online CPU? There is a percpu
> unmask function (mvneta_percpu_unmask_interrupt), so maybe there should
> also be a mvneta_percpu_mask_interrupt. With this, the masking should
> look like below:
>
> 	for_each_online_cpu(cpu)
> 		smp_call_function_single(cpu, mvneta_percpu_mask_interrupt,
> 					 pp, true);
Indeed you are right; however, I am a bit surprised not to have had any
issue caused by this. I will fix it.
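For reference, such a helper could simply mirror the existing
mvneta_percpu_unmask_interrupt and be called on each online CPU. A rough
sketch only (not the final patch, the exact fix may end up different):

static void mvneta_percpu_mask_interrupt(void *arg)
{
	struct mvneta_port *pp = arg;

	/* Mask all the ethernet port interrupts as seen from the CPU
	 * this function runs on.
	 */
	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
	mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
	mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
}

and then in mvneta_config_rss():

	/* Mask all ethernet port interrupts on each online CPU */
	for_each_online_cpu(cpu)
		smp_call_function_single(cpu, mvneta_percpu_mask_interrupt,
					 pp, true);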
>
>> +	mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
>> +	mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
>> +
>> +	/* We have to synchronise on the napi of each CPU */
>> +	for_each_online_cpu(cpu) {
>> +		struct mvneta_pcpu_port *pcpu_port =
>> +			per_cpu_ptr(pp->ports, cpu);
>> +
>> +		napi_synchronize(&pcpu_port->napi);
>> +		napi_disable(&pcpu_port->napi);
>> +	}
>> +
>> +	pp->rxq_def = pp->indir[0];
>> +
>> +	/* update unicast mapping */
>> +	mvneta_set_rx_mode(pp->dev);
>> +
>> +	/* Update val of portCfg register accordingly with all RxQueue types */
>> +	val = MVNETA_PORT_CONFIG_DEFL_VALUE(pp->rxq_def);
>> +	mvreg_write(pp, MVNETA_PORT_CONFIG, val);
>> +
>> +	/* Update the elected CPU matching the new rxq_def */
>> +	mvneta_percpu_elect(pp);
>> +
>> +	/* We have to synchronise on the napi of each CPU */
>> +	for_each_online_cpu(cpu) {
>> +		struct mvneta_pcpu_port *pcpu_port =
>> +			per_cpu_ptr(pp->ports, cpu);
>> +
>> +		napi_enable(&pcpu_port->napi);
>> +	}
>> +
>
> rxq_def changed, but txq vs CPU mapping remained as in the beginning -
> is it intentional?
The txq vs CPU mapping is changed in the mvneta_percpu_elect() function.
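Just to make this explicit: the election picks the CPU that will own the
new rxq_def and rewrites the per-CPU queue access register, which carries
both the RX and the TX access bits. A rough sketch of the idea only (it
reuses the driver's MVNETA_CPU_MAP, MVNETA_CPU_RXQ_ACCESS_ALL_MASK and
MVNETA_CPU_TXQ_ACCESS_ALL_MASK definitions but is not the actual function
body):

	int elected_cpu = pp->rxq_def % num_online_cpus();
	int i = 0;

	for_each_online_cpu(cpu) {
		/* Every CPU keeps access to the TX queues... */
		u32 map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;

		/* ...but only the elected CPU gets the RX queues */
		if (i++ == elected_cpu)
			map |= MVNETA_CPU_RXQ_ACCESS_ALL_MASK;

		mvreg_write(pp, MVNETA_CPU_MAP(cpu), map);
	}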
Thanks for this prompt review.
Gregory
--
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com