Message-ID: <3f42845aafce14dcd96a83690fe296eb9eb6b50d.camel@mediatek.com>
Date: Tue, 28 Jun 2022 13:44:31 +0800
From: Biao Huang <biao.huang@...iatek.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: David Miller <davem@...emloft.net>,
Rob Herring <robh+dt@...nel.org>,
Bartosz Golaszewski <brgl@...ev.pl>,
Fabien Parent <fparent@...libre.com>,
Felix Fietkau <nbd@....name>, John Crispin <john@...ozen.org>,
Sean Wang <sean.wang@...iatek.com>,
Mark Lee <Mark-MC.Lee@...iatek.com>,
"Matthias Brugger" <matthias.bgg@...il.com>,
<netdev@...r.kernel.org>, <devicetree@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>,
Yinghua Pan <ot_yinghua.pan@...iatek.com>,
<srv_heupstream@...iatek.com>,
Macpaul Lin <macpaul.lin@...iatek.com>
Subject: Re: [PATCH net-next v3 09/10] net: ethernet: mtk-star-emac:
separate tx/rx handling with two NAPIs
Dear Jakub,
Thanks for your comments~
On Thu, 2022-06-23 at 21:34 -0700, Jakub Kicinski wrote:
> On Wed, 22 Jun 2022 17:05:44 +0800 Biao Huang wrote:
> > + if (rx || tx) {
> > + spin_lock_irqsave(&priv->lock, flags);
> > + /* mask Rx and TX Complete interrupt */
> > + mtk_star_disable_dma_irq(priv, rx, tx);
> > + spin_unlock_irqrestore(&priv->lock, flags);
>
> You do _irqsave / _irqrestore here
We should invoke spin_lock() instead; there is no need to save/restore the IRQ flags here.
>
> > + if (rx)
> > + __napi_schedule_irqoff(&priv->rx_napi);
> > + if (tx)
> > + __napi_schedule_irqoff(&priv->tx_napi);
>
> Yet assume _irqoff here.
>
> So can this be run from non-IRQ context or not?
It seems __napi_schedule() is more appropriate for our case; we'll modify it in the
next send.
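A rough sketch of what we have in mind for the handler hunk above (names are
taken from the patch context; the final code in the next send may differ):

	if (rx || tx) {
		spin_lock(&priv->lock);
		/* mask RX and TX complete interrupts until NAPI is done */
		mtk_star_disable_dma_irq(priv, rx, tx);
		spin_unlock(&priv->lock);

		/* __napi_schedule() masks local IRQs itself, so it does not
		 * assume hard-IRQ context the way __napi_schedule_irqoff()
		 * does
		 */
		if (rx)
			__napi_schedule(&priv->rx_napi);
		if (tx)
			__napi_schedule(&priv->tx_napi);
	}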
>
> > - if (mtk_star_ring_full(ring))
> > + if (unlikely(mtk_star_tx_ring_avail(ring) < MAX_SKB_FRAGS + 1))
> > netif_stop_queue(ndev);
>
> Please look around other drivers (like ixgbe) and copy the way they
> handle safe stopping of the queues. You need to add some barriers and
> re-check after disabling.
Yes, we looked at drivers from other vendors and will do something similar in the
next send.
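For reference, the stop/re-check pattern we intend to follow looks roughly like
the below; mtk_star_maybe_stop_tx() is only a placeholder name, called from the
xmit path before descriptors are written, and the field/helper names follow the
patch context:

	static int mtk_star_maybe_stop_tx(struct mtk_star_priv *priv, u16 size)
	{
		struct mtk_star_ring *ring = &priv->tx_ring;

		if (likely(mtk_star_tx_ring_avail(ring) >= size))
			return 0;

		netif_stop_queue(priv->ndev);

		/* make the queue stop visible before re-reading the ring
		 * state; pairs with the barrier on the completion path
		 * before netif_wake_queue()
		 */
		smp_mb();

		/* the completion path may have freed descriptors meanwhile */
		if (likely(mtk_star_tx_ring_avail(ring) < size))
			return -EBUSY;

		netif_start_queue(priv->ndev);
		return 0;
	}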
>
> > - spin_unlock_bh(&priv->lock);
> > -
> > mtk_star_dma_resume_tx(priv);
> >
> > return NETDEV_TX_OK;
>
>
> > + while ((entry != head) && (count < MTK_STAR_RING_NUM_DESCS -
> > 1)) {
> >
>
> Parenthesis unnecessary, so is the empty line after the while ().
Yes, the empty line will be removed in the next send.
>
> > ret = mtk_star_tx_complete_one(priv);
> > if (ret < 0)
> > break;
> > +
> > + count++;
> > + pkts_compl++;
> > + bytes_compl += ret;
> > + entry = ring->tail;
> > }
> >
> > + __netif_tx_lock_bh(netdev_get_tx_queue(priv->ndev, 0));
> > netdev_completed_queue(ndev, pkts_compl, bytes_compl);
> > + __netif_tx_unlock_bh(netdev_get_tx_queue(priv->ndev, 0));
>
> what are you taking this lock for?
In this version, we encountered an issue related to __QUEUE_STATE_STACK_OFF,
and adding __netif_tx_lock_bh here made it disappear.
After receiving your comments, we surveyed how drivers from other vendors call
netdev_completed_queue. We believe the __QUEUE_STATE_STACK_OFF issue may be
caused by the improper use of __napi_schedule_irqoff mentioned above, so we
will remove __netif_tx_lock_bh and try again.
If our local stress test passes, the corresponding modification will be
included in the next send.
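The completion side would then end up roughly like this (no extra tx lock;
netdev_completed_queue() is serialized by the single TX NAPI poller, and the
barrier pairs with the one in the xmit stop path sketched earlier):

	netdev_completed_queue(ndev, pkts_compl, bytes_compl);

	/* descriptor frees must be visible before checking/waking the queue */
	smp_mb();

	if (unlikely(netif_queue_stopped(ndev)) &&
	    mtk_star_tx_ring_avail(ring) > MTK_STAR_TX_THRESH)
		netif_wake_queue(ndev);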
>
> > - if (wake && netif_queue_stopped(ndev))
> > + if (unlikely(netif_queue_stopped(ndev)) &&
> > + (mtk_star_tx_ring_avail(ring) > MTK_STAR_TX_THRESH))
> > netif_wake_queue(ndev);
> >
> > - spin_unlock(&priv->lock);
> > + if (napi_complete(napi)) {
> > + spin_lock_irqsave(&priv->lock, flags);
> > + mtk_star_enable_dma_irq(priv, false, true);
> > + spin_unlock_irqrestore(&priv->lock, flags);
> > + }
> > +
> > + return 0;
> > }
> > @@ -1475,6 +1514,7 @@ static int mtk_star_set_timing(struct
> > mtk_star_priv *priv)
> >
> > return regmap_write(priv->regs, MTK_STAR_REG_TEST0, delay_val);
> > }
> > +
> > static int mtk_star_probe(struct platform_device *pdev)
> > {
> > struct device_node *of_node;
>
> spurious whitespace change
Yes, we will fix it in the next send.
Best Regards!
Biao