Message-Id: <20190624092321.455363908@linuxfoundation.org>
Date: Mon, 24 Jun 2019 17:55:48 +0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Gary Leshner <Gary.S.Leshner@...el.com>,
Mike Marciniszyn <mike.marciniszyn@...el.com>,
Dennis Dalessandro <dennis.dalessandro@...el.com>,
Jason Gunthorpe <jgg@...lanox.com>
Subject: [PATCH 5.1 016/121] IB/hfi1: Close PSM sdma_progress sleep window
From: Mike Marciniszyn <mike.marciniszyn@...el.com>
commit da9de5f8527f4b9efc82f967d29a583318c034c7 upstream.
sdma_progress() is called outside the wait lock.
This opens a race window: sdma_progress() can return false just as the
sdma_engine goes idle. If that happens, no further sdma interrupt will
arrive to trigger the wakeup, and the user_sdma xmit hangs.
Fix by moving the lock to enclose the sdma_progress() call.
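The race is the classic lost-wakeup pattern: the "do I still need to
sleep?" test happens outside the lock that the wakeup path takes, so the
engine can finish its last descriptor and issue its final wakeup in the
window between the test and the enqueue. Below is a minimal userspace
model of the same pattern, using pthreads in place of the driver's
seqlock/iowait machinery (every name in it is illustrative; none of it
is hfi1 code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static bool engine_busy = true;	/* models "sdma engine still running" */

/* Broken shape, like the old defer_packet_queue(): test outside the lock. */
static void waiter_broken(void)
{
	if (engine_busy) {	/* unlocked test: the engine may idle and
				 * broadcast right here, so the wakeup is
				 * lost before we ever start waiting */
		pthread_mutex_lock(&lock);
		pthread_cond_wait(&wake, &lock);	/* can sleep forever */
		pthread_mutex_unlock(&lock);
	}
}

/* Fixed shape: the test and the sleep sit under the same lock the wakeup
 * path takes, so the final broadcast cannot slip past the waiter. */
static void waiter_fixed(void)
{
	pthread_mutex_lock(&lock);
	while (engine_busy)
		pthread_cond_wait(&wake, &lock);
	pthread_mutex_unlock(&lock);
}

/* Wakeup path: the engine goes idle and sends its last notification. */
static void *engine_thread(void *arg)
{
	(void)arg;
	usleep(1000);		/* pretend to finish outstanding work */
	pthread_mutex_lock(&lock);
	engine_busy = false;
	pthread_cond_broadcast(&wake);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t engine;

	(void)waiter_broken;	/* kept for contrast only; calling it can hang */
	pthread_create(&engine, NULL, engine_thread, NULL);
	waiter_fixed();		/* always returns once the engine idles */
	pthread_join(engine, NULL);
	puts("woken up");
	return 0;
}

Build with "gcc -pthread". The fixed waiter always terminates because the
engine's broadcast is serialized with the busy test, which is exactly what
moving write_seqlock() above the sdma_progress() check achieves below.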
Also, delete busycount. The retry counter has been unnecessary since:
commit bcad29137a97 ("IB/hfi1: Serve the most starved iowait entry first")
Cc: <stable@...r.kernel.org>
Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Reviewed-by: Gary Leshner <Gary.S.Leshner@...el.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@...el.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@...el.com>
Signed-off-by: Jason Gunthorpe <jgg@...lanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/infiniband/hw/hfi1/user_sdma.c | 12 ++++--------
drivers/infiniband/hw/hfi1/user_sdma.h | 1 -
2 files changed, 4 insertions(+), 9 deletions(-)
--- a/drivers/infiniband/hw/hfi1/user_sdma.c
+++ b/drivers/infiniband/hw/hfi1/user_sdma.c
@@ -130,20 +130,16 @@ static int defer_packet_queue(
{
struct hfi1_user_sdma_pkt_q *pq =
container_of(wait->iow, struct hfi1_user_sdma_pkt_q, busy);
- struct user_sdma_txreq *tx =
- container_of(txreq, struct user_sdma_txreq, txreq);
- if (sdma_progress(sde, seq, txreq)) {
- if (tx->busycount++ < MAX_DEFER_RETRY_COUNT)
- goto eagain;
- }
+ write_seqlock(&sde->waitlock);
+ if (sdma_progress(sde, seq, txreq))
+ goto eagain;
/*
* We are assuming that if the list is enqueued somewhere, it
* is to the dmawait list since that is the only place where
* it is supposed to be enqueued.
*/
xchg(&pq->state, SDMA_PKT_Q_DEFERRED);
- write_seqlock(&sde->waitlock);
if (list_empty(&pq->busy.list)) {
iowait_get_priority(&pq->busy);
iowait_queue(pkts_sent, &pq->busy, &sde->dmawait);
@@ -151,6 +147,7 @@ static int defer_packet_queue(
write_sequnlock(&sde->waitlock);
return -EBUSY;
eagain:
+ write_sequnlock(&sde->waitlock);
return -EAGAIN;
}
@@ -804,7 +801,6 @@ static int user_sdma_send_pkts(struct us
tx->flags = 0;
tx->req = req;
- tx->busycount = 0;
INIT_LIST_HEAD(&tx->list);
/*
--- a/drivers/infiniband/hw/hfi1/user_sdma.h
+++ b/drivers/infiniband/hw/hfi1/user_sdma.h
@@ -245,7 +245,6 @@ struct user_sdma_txreq {
struct list_head list;
struct user_sdma_request *req;
u16 flags;
- unsigned int busycount;
u16 seqnum;
};
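For readability, here is the post-patch defer_packet_queue() pieced
together from the two user_sdma.c hunks above. The parameter list is not
part of the diff context and is inferred from the identifiers the hunks
use, so treat this as a sketch of the result rather than the verbatim
upstream file:

static int defer_packet_queue(
	struct sdma_engine *sde,
	struct iowait_work *wait,
	struct sdma_txreq *txreq,
	uint seq,
	bool pkts_sent)
{
	struct hfi1_user_sdma_pkt_q *pq =
		container_of(wait->iow, struct hfi1_user_sdma_pkt_q, busy);

	/* The progress test now happens under the same lock the
	 * interrupt-side wakeup takes, closing the sleep window. */
	write_seqlock(&sde->waitlock);
	if (sdma_progress(sde, seq, txreq))
		goto eagain;
	/*
	 * We are assuming that if the list is enqueued somewhere, it
	 * is to the dmawait list since that is the only place where
	 * it is supposed to be enqueued.
	 */
	xchg(&pq->state, SDMA_PKT_Q_DEFERRED);
	if (list_empty(&pq->busy.list)) {
		iowait_get_priority(&pq->busy);
		iowait_queue(pkts_sent, &pq->busy, &sde->dmawait);
	}
	write_sequnlock(&sde->waitlock);
	return -EBUSY;
eagain:
	write_sequnlock(&sde->waitlock);
	return -EAGAIN;
}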