Message-ID: <BN8PR12MB326638B0BA74DA762C89DF54D3F90@BN8PR12MB3266.namprd12.prod.outlook.com>
Date: Mon, 1 Jul 2019 10:15:17 +0000
From: Jose Abreu <Jose.Abreu@...opsys.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Jose Abreu <Jose.Abreu@...opsys.com>
CC: linux-kernel <linux-kernel@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
Joao Pinto <Joao.Pinto@...opsys.com>,
"David S . Miller" <davem@...emloft.net>,
Giuseppe Cavallaro <peppe.cavallaro@...com>,
Alexandre Torgue <alexandre.torgue@...com>
Subject: RE: [PATCH net-next v2 06/10] net: stmmac: Do not disable interrupts
when cleaning TX
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
> By the
>
> if ((status & handle_rx) && (chan < priv->plat->rx_queues_to_use)) {
> stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
> napi_schedule_irqoff(&ch->rx_napi);
> }
>
> branch directly above? If so, is it possible to have fewer rx than tx
> queues and miss this?
Yes, it is possible.
> this logic seems more complex than needed?
>
> if (status)
> status |= handle_rx | handle_tx;
>
> if ((status & handle_rx) && (chan < priv->plat->rx_queues_to_use)) {
>
> }
>
> if ((status & handle_tx) && (chan < priv->plat->tx_queues_to_use)) {
>
> }
>
> status & handle_rx implies status & handle_tx and vice versa.
This is removed in patch 09/10.
> > - if (work_done < budget && napi_complete_done(napi, work_done))
> > - stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
> > + if (work_done < budget)
> > + napi_complete_done(napi, work_done);
>
> It does seem odd that stmmac_napi_poll_rx and stmmac_napi_poll_tx both
> call stmmac_enable_dma_irq(..) independent of the other. Shouldn't the
> IRQ remain masked while either is active or scheduled? That is almost
> what this patch does, though not exactly.
After patch 09/10 the interrupts will only be disabled by the RX NAPI and
re-enabled by it as well. I can run some tests on whether disabling the
interrupts independently gives more performance, but I wouldn't expect so,
because the real bottleneck in my iperf3 tests is the RX path ...