Message-ID: <87wos9ik1k.fsf@bootlin.com>
Date: Wed, 29 Aug 2018 11:44:39 +0200
From: Gregory CLEMENT <gregory.clement@...tlin.com>
To: Jisheng Zhang <Jisheng.Zhang@...aptics.com>
Cc: <thomas.petazzoni@...tlin.com>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Andrew Lunn <andrew@...n.ch>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 5/5] net: mvneta: reduce smp_processor_id() calling in mvneta_tx_done_gbe
Hi Jisheng,
On Wed, Aug 29 2018, Jisheng Zhang <Jisheng.Zhang@...aptics.com> wrote:
> In the loop of mvneta_tx_done_gbe(), smp_processor_id() is called on
> each iteration; move the call out of the loop to optimize the code a bit.
>
> Before the patch, the loop looks like this (on arm64):
>
> ldr x1, [x29,#120]
> ...
> ldr w24, [x1,#36]
> ...
> bl 0 <_raw_spin_lock>
> str w24, [x27,#132]
> ...
>
> After the patch, the loop looks like this (on arm64):
>
> ...
> bl 0 <_raw_spin_lock>
> str w23, [x28,#132]
> ...
> where w23 is loaded once, so it is already available before the loop.
>
> Moreover, mvneta_tx_done_gbe() is called from mvneta_poll(), which
> runs in a non-preemptible context, so it is safe to call
> smp_processor_id() just once.
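
For readers following along, here is a minimal userspace sketch of the
pattern being described: a loop-invariant call is hoisted out of the loop
so it is evaluated once. sched_getcpu() stands in for the kernel's
smp_processor_id(), and lock_queue() is a purely illustrative placeholder
for __netif_tx_lock(); this is not the driver code itself.

    /*
     * Sketch only: hoist a loop-invariant CPU lookup out of the loop.
     * In the kernel this is valid because mvneta_tx_done_gbe() runs in
     * NAPI poll context, where preemption is disabled and the CPU
     * cannot change between iterations.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Placeholder for __netif_tx_lock(nq, cpu): record which CPU takes
     * the per-queue lock. */
    static void lock_queue(int queue, int cpu)
    {
            printf("queue %d locked by cpu %d\n", queue, cpu);
    }

    int main(void)
    {
            int cpu = sched_getcpu();       /* hoisted: evaluated once */

            for (int queue = 0; queue < 4; queue++)
                    lock_queue(queue, cpu); /* was: lock_queue(queue, sched_getcpu()) */

            return 0;
    }

The hoist is only correct because the value cannot change while the loop
runs; the non-preemptible context mentioned above is what guarantees that
in the kernel case.
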
This improvement should go to net-next. Other than that, the patch looks nice:
Reviewed-by: Gregory CLEMENT <gregory.clement@...tlin.com>
Thanks,
Gregory
>
> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@...aptics.com>
> ---
> drivers/net/ethernet/marvell/mvneta.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 7d98f7828a30..62e81e267e13 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2507,12 +2507,13 @@ static void mvneta_tx_done_gbe(struct mvneta_port *pp, u32 cause_tx_done)
> {
> struct mvneta_tx_queue *txq;
> struct netdev_queue *nq;
> + int cpu = smp_processor_id();
>
> while (cause_tx_done) {
> txq = mvneta_tx_done_policy(pp, cause_tx_done);
>
> nq = netdev_get_tx_queue(pp->dev, txq->id);
> - __netif_tx_lock(nq, smp_processor_id());
> + __netif_tx_lock(nq, cpu);
>
> if (txq->count)
> mvneta_txq_done(pp, txq);
> --
> 2.18.0
>
--
Gregory Clement, Bootlin
Embedded Linux and Kernel engineering
http://bootlin.com