Message-ID: <CAKOOJTzO+QCZevo=LikLE-0tpkCZUWM3=zS3dpTbQS4dNtzhAQ@mail.gmail.com>
Date: Wed, 11 Aug 2021 13:13:22 -0700
From: Edwin Peer <edwin.peer@...adcom.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S . Miller" <davem@...emloft.net>,
Michael Chan <michael.chan@...adcom.com>,
Jeffrey Huang <huangjw@...adcom.com>,
Eddie Wai <eddie.wai@...adcom.com>,
Prashant Sreedharan <prashant@...adcom.com>,
Andrew Gospodarek <gospo@...adcom.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net 1/4] bnxt: don't lock the tx queue from napi poll
On Wed, Aug 11, 2021 at 12:32 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> We can't take the tx lock from the napi poll routine, because
> netpoll can poll napi at any moment, including with the tx lock
> already held.
>
> It seems that the tx lock is only protecting against the disable
> path; appropriate barriers are already in place to make sure
> cleanup can safely run concurrently with start_xmit. I don't see
> any other reason why 'stopped && avail > thresh' needs to be
> re-checked under the lock.
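
Agreed on the barriers. For anyone following along, the stop side of
that pairing is in bnxt_start_xmit() and looks roughly like this
(paraphrased from memory, not the exact driver code):

	netif_tx_stop_queue(txq);
	/* order the stop vs. the ring space re-check below; pairs with
	 * the smp_mb() in bnxt_tx_int() before it tests queue_stopped
	 */
	smp_mb();
	if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)
		netif_tx_wake_queue(txq);

So the wake in bnxt_tx_int() can indeed run safely against start_xmit
without taking the queue lock.
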
>
> Remove the tx lock and use synchronize_net() to make sure
> closing the device does not race with restarting the queues.
> Annotate accesses to dev_state against data races.
>
> Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 21 ++++++++++-----------
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 865fcb8cf29f..07827d6b0fec 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -730,15 +730,10 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
> */
> smp_mb();
>
> - if (unlikely(netif_tx_queue_stopped(txq)) &&
> - (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)) {
> - __netif_tx_lock(txq, smp_processor_id());
> - if (netif_tx_queue_stopped(txq) &&
> - bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
> - txr->dev_state != BNXT_DEV_STATE_CLOSING)
> - netif_tx_wake_queue(txq);
> - __netif_tx_unlock(txq);
> - }
> + if (netif_tx_queue_stopped(txq) &&
Probably worth retaining the unlikely() that the original outer check
had; see the sketch below this hunk.
> + bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
> + READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
> + netif_tx_wake_queue(txq);
> }
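
Something like this (untested), just folding the hint back into the
combined check:

	if (unlikely(netif_tx_queue_stopped(txq)) &&
	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
		netif_tx_wake_queue(txq);
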
>
> static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
> @@ -9264,9 +9259,11 @@ void bnxt_tx_disable(struct bnxt *bp)
> if (bp->tx_ring) {
> for (i = 0; i < bp->tx_nr_rings; i++) {
> txr = &bp->tx_ring[i];
> - txr->dev_state = BNXT_DEV_STATE_CLOSING;
> + WRITE_ONCE(txr->dev_state, BNXT_DEV_STATE_CLOSING);
> }
> }
> + /* Make sure napi polls see @dev_state change */
> + synchronize_net();
> /* Drop carrier first to prevent TX timeout */
> netif_carrier_off(bp->dev);
> /* Stop all TX queues */
> @@ -9280,8 +9277,10 @@ void bnxt_tx_enable(struct bnxt *bp)
>
> for (i = 0; i < bp->tx_nr_rings; i++) {
> txr = &bp->tx_ring[i];
> - txr->dev_state = 0;
> + WRITE_ONCE(txr->dev_state, 0);
> }
> + /* Make sure napi polls see @dev_state change */
> + synchronize_net();
> netif_tx_wake_all_queues(bp->dev);
> if (bp->link_info.link_up)
> netif_carrier_on(bp->dev);
> --
> 2.31.1
Regards,
Edwin Peer