Message-ID: <8ccbfab0-e24f-b758-cd11-27b6d8ab1d48@intel.com>
Date: Tue, 1 Aug 2023 11:04:26 -0700
From: Jesse Brandeburg <jesse.brandeburg@...el.com>
To: Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>,
<kys@...rosoft.com>, <haiyangz@...rosoft.com>,
<wei.liu@...nel.org>, <decui@...rosoft.com>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>, <pabeni@...hat.com>,
<longli@...rosoft.com>, <sharmaajay@...rosoft.com>,
<leon@...nel.org>, <cai.huoqing@...ux.dev>,
<ssengar@...ux.microsoft.com>, <vkuznets@...hat.com>,
<tglx@...utronix.de>, <linux-hyperv@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-rdma@...r.kernel.org>
CC: <schakrabarti@...rosoft.com>, <stable@...r.kernel.org>
Subject: Re: [PATCH V7 net] net: mana: Fix MANA VF unload when hardware is
On 8/1/2023 5:29 AM, Souradeep Chakrabarti wrote:
> When unloading the MANA driver, mana_dealloc_queues() waits for the MANA
> hardware to complete any inflight packets and set the pending send count
> to zero. But if the hardware has failed, mana_dealloc_queues()
> could wait forever.
>
> Fix this by adding a timeout to the wait. Set the timeout to 120 seconds,
The tx timeout in the stack defaults to 5 seconds, do you not have that
enabled? What happens when you start getting resets while unloading?
> which is a somewhat arbitrary value that is more than long enough for
> functional hardware to complete any sends.
I'd say 2 or 5 seconds is probably plenty of time to hang up a driver
unload.
>
> Cc: stable@...r.kernel.org
> Fixes: ca9c54d2d6a5 ("net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)")
>
> Signed-off-by: Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>
keep s-o-b and other trailers together please, with no blank lines
between them; the gap messes up git tooling and doesn't conform to
kernel standards.
> ---
> V6 -> V7:
> * Optimized the while loop for freeing skb.
>
> V5 -> V6:
> * Added pcie_flr to reset the pci after timeout.
> * Fixed the position of changelog.
> * Removed unused variable like cq.
>
> V4 -> V5:
> * Added fixes tag
> * Changed the usleep_range from static to incremental value.
> * Initialized timeout in the beginning.
>
> V3 -> V4:
> * Removed the unnecessary braces from mana_dealloc_queues().
>
> V2 -> V3:
> * Removed the unnecessary braces from mana_dealloc_queues().
>
> V1 -> V2:
> * Added net branch
> * Removed the typecasting to (struct mana_context*) of void pointer
> * Repositioned timeout variable in mana_dealloc_queues()
> * Repositioned vf_unload_timeout in mana_context struct, to utilise the
>   6-byte hole
> ---
> drivers/net/ethernet/microsoft/mana/mana_en.c | 37 +++++++++++++++++--
> 1 file changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index a499e460594b..3c5552a176d0 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -8,6 +8,7 @@
> #include <linux/ethtool.h>
> #include <linux/filter.h>
> #include <linux/mm.h>
> +#include <linux/pci.h>
>
> #include <net/checksum.h>
> #include <net/ip6_checksum.h>
> @@ -2345,9 +2346,12 @@ int mana_attach(struct net_device *ndev)
> static int mana_dealloc_queues(struct net_device *ndev)
> {
> struct mana_port_context *apc = netdev_priv(ndev);
> + unsigned long timeout = jiffies + 120 * HZ;
> struct gdma_dev *gd = apc->ac->gdma_dev;
> struct mana_txq *txq;
> + struct sk_buff *skb;
> int i, err;
> + u32 tsleep;
>
> if (apc->port_is_up)
> return -EINVAL;
> @@ -2363,15 +2367,40 @@ static int mana_dealloc_queues(struct net_device *ndev)
> * to false, but it doesn't matter since mana_start_xmit() drops any
> * new packets due to apc->port_is_up being false.
> *
> - * Drain all the in-flight TX packets
> + * Drain all the in-flight TX packets.
> + * A timeout of 120 seconds for all the queues is used.
> + * This will break the while loop when h/w is not responding.
> + * The 120-second value was chosen considering the maximum
> + * number of queues.
> */
> +
> for (i = 0; i < apc->num_queues; i++) {
> txq = &apc->tx_qp[i].txq;
> -
> - while (atomic_read(&txq->pending_sends) > 0)
> - usleep_range(1000, 2000);
> + tsleep = 1000;
> + while (atomic_read(&txq->pending_sends) > 0 &&
> + time_before(jiffies, timeout)) {
> + usleep_range(tsleep, tsleep + 1000);
> + tsleep <<= 1;
> + }
> + if (atomic_read(&txq->pending_sends)) {
> + err = pcie_flr(to_pci_dev(gd->gdma_context->dev));
> + if (err) {
> + netdev_err(ndev, "flr failed %d with %d pkts pending in txq %u\n",
> + err, atomic_read(&txq->pending_sends),
> + txq->gdma_txq_id);
> + }
> + break;
> + }
> }
>
> + for (i = 0; i < apc->num_queues; i++) {
> + txq = &apc->tx_qp[i].txq;
> +	while ((skb = skb_dequeue(&txq->pending_skbs))) {
> + mana_unmap_skb(skb, apc);
> + dev_consume_skb_any(skb);
dev_kfree_skb_any() would be more correct here since this is an error
path and the transmit is presumed dropped, isn't it?
> + }
> + atomic_set(&txq->pending_sends, 0);
> + }
> /* We're 100% sure the queues can no longer be woken up, because
> * we're sure now mana_poll_tx_cq() can't be running.
> */