Message-Id: <20190708150528.272739806@linuxfoundation.org>
Date: Mon, 8 Jul 2019 17:12:29 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	Dennis Dalessandro <dennis.dalessandro@...el.com>,
	Mike Marciniszyn <mike.marciniszyn@...el.com>,
	Doug Ledford <dledford@...hat.com>,
	Sasha Levin <sashal@...nel.org>
Subject: [PATCH 4.9 036/102] IB/hfi1: Avoid hardlockup with flushlist_lock

commit cf131a81967583ae737df6383a0893b9fee75b4e upstream.

Heavy contention of the sde flushlist_lock can cause hard lockups at
extreme scale when the flushing logic is under stress.

Mitigate by replacing the item-at-a-time copy to the local list with
an O(1) list_splice_init() and by using the high-priority work queue
to do the flushes.

Ported to linux-4.9.y.
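
As background (not part of the patch itself): list_splice_init()
detaches the whole source list by relinking only the head nodes, so
the time spent inside flushlist_lock no longer grows with the number
of queued requests. A minimal kernel-style sketch, with hypothetical
names not taken from the hfi1 driver:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical request type, for illustration only. */
struct demo_req {
	struct list_head list;
};

static void demo_drain(struct list_head *src, spinlock_t *lock)
{
	LIST_HEAD(local);
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	/*
	 * O(1): relink src's head nodes into 'local' and reinitialize
	 * src to empty, no matter how many entries src holds.
	 */
	list_splice_init(src, &local);
	spin_unlock_irqrestore(lock, flags);

	/* Entries on 'local' can now be completed without the lock. */
}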
Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Cc: <stable@...r.kernel.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@...el.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@...el.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@...el.com>
Signed-off-by: Doug Ledford <dledford@...hat.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 drivers/infiniband/hw/hfi1/sdma.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 9cbe52d21077..76e63c88a87a 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -410,10 +410,7 @@ static void sdma_flush(struct sdma_engine *sde)
 	sdma_flush_descq(sde);
 	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	/* copy flush list */
-	list_for_each_entry_safe(txp, txp_next, &sde->flushlist, list) {
-		list_del_init(&txp->list);
-		list_add_tail(&txp->list, &flushlist);
-	}
+	list_splice_init(&sde->flushlist, &flushlist);
 	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	/* flush from flush list */
 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
@@ -2406,7 +2403,7 @@ int sdma_send_txreq(struct sdma_engine *sde,
 		wait->tx_count++;
 		wait->count += tx->num_desc;
 	}
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto unlock;
 nodesc:
@@ -2504,7 +2501,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait *wait,
 		}
 	}
 	spin_unlock(&sde->flushlist_lock);
-	schedule_work(&sde->flush_worker);
+	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;
 nodesc:
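
As background (not part of the patch itself): unlike schedule_work(),
which queues on system_wq without pinning the work item to a CPU,
queue_work_on() lets the caller pick both the CPU and the workqueue;
system_highpri_wq runs its workers at elevated priority, so the flush
is less likely to be starved under heavy load. A minimal sketch, with
hypothetical names not taken from the hfi1 driver:

#include <linux/workqueue.h>

/* Hypothetical flush handler, for illustration only. */
static void demo_flush_worker(struct work_struct *work)
{
	/* drain and complete pending requests here */
}

static DECLARE_WORK(demo_work, demo_flush_worker);

static void demo_kick(int cpu)
{
	/* Queue on a specific CPU, on the high-priority system queue. */
	queue_work_on(cpu, system_highpri_wq, &demo_work);
}
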
--
2.20.1