Message-ID: <lsq.1471110171.744093863@decadent.org.uk>
Date: Sat, 13 Aug 2016 18:42:51 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Jack Morgenstein" <jackm@....mellanox.co.il>,
"Doug Ledford" <dledford@...hat.com>,
"Yishai Hadas" <yishaih@...lanox.com>,
"Leon Romanovsky" <leon@...nel.org>
Subject: [PATCH 3.16 216/305] IB/mlx4: Fix error flow when sending mads
under SRIOV
3.16.37-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Yishai Hadas <yishaih@...lanox.com>
commit a6100603a4a87fc436199362bdb81cb849faaf6e upstream.

Fix the MAD send error flow to prevent double-freeing address handles
and leaking tx_ring entries when SR-IOV is active.

If ib_post_send() fails, the address handle pointer in the tx_ring entry
must be set to NULL (or there will be a double-free) and tx_ix_tail must
be incremented (or there will be a leak of tx_ring entries).
The tx_ring is handled the same way in the send-completion handler.
Fixes: 37bfc7c1e83f ("IB/mlx4: SR-IOV multiplex and demultiplex MADs")
Signed-off-by: Yishai Hadas <yishaih@...lanox.com>
Reviewed-by: Jack Morgenstein <jackm@....mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@...nel.org>
Signed-off-by: Doug Ledford <dledford@...hat.com>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
drivers/infiniband/hw/mlx4/mad.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -531,7 +531,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib
tun_tx_ix = (++tun_qp->tx_ix_head) & (MLX4_NUM_TUNNEL_BUFS - 1);
spin_unlock(&tun_qp->tx_lock);
if (ret)
- goto out;
+ goto end;

tun_mad = (struct mlx4_rcv_tunnel_mad *) (tun_qp->tx_ring[tun_tx_ix].buf.addr);
if (tun_qp->tx_ring[tun_tx_ix].ah)
@@ -600,9 +600,15 @@ int mlx4_ib_send_to_slave(struct mlx4_ib
wr.send_flags = IB_SEND_SIGNALED;

ret = ib_post_send(src_qp, &wr, &bad_wr);
-out:
- if (ret)
- ib_destroy_ah(ah);
+ if (!ret)
+ return 0;
+ out:
+ spin_lock(&tun_qp->tx_lock);
+ tun_qp->tx_ix_tail++;
+ spin_unlock(&tun_qp->tx_lock);
+ tun_qp->tx_ring[tun_tx_ix].ah = NULL;
+end:
+ ib_destroy_ah(ah);
return ret;
}
@@ -1256,9 +1262,15 @@ int mlx4_ib_send_to_wire(struct mlx4_ib_
ret = ib_post_send(send_qp, &wr, &bad_wr);
+ if (!ret)
+ return 0;
+
+ spin_lock(&sqp->tx_lock);
+ sqp->tx_ix_tail++;
+ spin_unlock(&sqp->tx_lock);
+ sqp->tx_ring[wire_tx_ix].ah = NULL;
out:
- if (ret)
- ib_destroy_ah(ah);
+ ib_destroy_ah(ah);
return ret;
}
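
For reviewers, the ownership rule the commit message describes can be
illustrated with a short user-space sketch. This is only an
illustration, not driver code: tx_queue, fake_post_send(),
complete_one() and the lock-free head/tail indices are simplified
stand-ins (the real driver guards tx_ix_head/tx_ix_tail with tx_lock,
and the completion handler runs from the CQ event path).

/*
 * Sketch of the invariant the patch enforces: once a send is posted,
 * the ring slot owns the address handle and the completion path frees
 * it; if the post fails, the sender must reclaim the slot (tail++)
 * and clear the stored pointer, then free the handle itself exactly
 * once.  All names here are stand-ins, not the mlx4 driver's own.
 */
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 4                 /* stand-in for MLX4_NUM_TUNNEL_BUFS */

struct tx_entry {
    void *ah;                       /* stand-in for the address handle */
};

struct tx_queue {
    struct tx_entry ring[RING_SIZE];
    unsigned int head, tail;        /* stand-ins for tx_ix_head/tx_ix_tail */
};

/* Stand-in for ib_post_send(); fails on demand. */
static int fake_post_send(int should_fail)
{
    return should_fail ? -1 : 0;
}

/* Stand-in for the send-completion handler: frees the slot's handle. */
static void complete_one(struct tx_queue *q)
{
    unsigned int ix = q->tail++ & (RING_SIZE - 1);

    free(q->ring[ix].ah);
    q->ring[ix].ah = NULL;
}

static int send_one(struct tx_queue *q, void *ah, int should_fail)
{
    unsigned int ix;
    int ret;

    if (q->head - q->tail >= RING_SIZE) {
        free(ah);                   /* ring full: nothing was posted */
        return -1;
    }

    ix = q->head++ & (RING_SIZE - 1);
    q->ring[ix].ah = ah;            /* the ring now owns ah */

    ret = fake_post_send(should_fail);
    if (!ret)
        return 0;                   /* completion path will free ah */

    /*
     * Error flow per the patch: reclaim the slot and drop the ring's
     * reference so no later pass can free ah a second time, then free
     * it here exactly once.
     */
    q->tail++;
    q->ring[ix].ah = NULL;
    free(ah);
    return ret;
}

int main(void)
{
    struct tx_queue q = { 0 };

    send_one(&q, malloc(16), 1);    /* failed post: slot reclaimed */
    send_one(&q, malloc(16), 0);    /* successful post */
    complete_one(&q);               /* completion frees the handle */
    printf("head=%u tail=%u in-flight=%u\n",
           q.head, q.tail, q.head - q.tail);
    return 0;
}

On the failed-post path the slot is reclaimed (tail catches up with
head) and the stored pointer is cleared, so the handle is freed exactly
once; running the sketch should end with head == tail and no in-flight
entries, mirroring what the patch does for both the tunnel and wire QPs.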