Message-Id: <20190829181547.8280-25-sashal@kernel.org>
Date: Thu, 29 Aug 2019 14:15:25 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Thomas Falcon <tlfalcon@...ux.ibm.com>,
Abdul Haleem <abdhalee@...ux.vnet.ibm.com>,
"Devesh K . Singh" <devesh_singh@...ibm.com>,
"David S . Miller" <davem@...emloft.net>,
Sasha Levin <sashal@...nel.org>, netdev@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH AUTOSEL 4.19 25/45] ibmvnic: Unmap DMA address of TX descriptor buffers after use
From: Thomas Falcon <tlfalcon@...ux.ibm.com>
[ Upstream commit 80f0fe0934cd3daa13a5e4d48a103f469115b160 ]
There's no need to wait until a completion is received to unmap
TX descriptor buffers that have been passed to the hypervisor.
Instead, unmap them as soon as the hypervisor call returns. This
patch avoids the possibility that a buffer will not be unmapped
because a TX completion is lost or mishandled.
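To illustrate the pattern being adopted, here is a minimal sketch, not
the driver's actual code: xmit_indirect() and submit_to_hypervisor()
are hypothetical stand-ins for ibmvnic_xmit() and
send_subcrq_indirect(); only the dma_map_single()/dma_unmap_single()
calls are the real DMA API. The descriptor array is mapped, handed to
the hypervisor call, and unmapped as soon as that call returns, so no
mapping state has to survive until the TX-completion handler.

#include <linux/dma-mapping.h>

/* Hypothetical stand-in for send_subcrq_indirect(); the hypervisor is
 * assumed to have consumed the descriptors by the time it returns.
 */
extern int submit_to_hypervisor(dma_addr_t indir_dma, u64 num_entries);

static int xmit_indirect(struct device *dev, void *indir_arr, size_t len,
			 u64 num_entries)
{
	dma_addr_t indir_dma;
	int rc;

	/* Map the descriptor array for a device-bound (TX) transfer. */
	indir_dma = dma_map_single(dev, indir_arr, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, indir_dma))
		return -ENOMEM;

	rc = submit_to_hypervisor(indir_dma, num_entries);

	/* The call has returned, so drop the mapping right away instead
	 * of waiting for a TX completion that may never arrive.
	 */
	dma_unmap_single(dev, indir_dma, len, DMA_TO_DEVICE);

	return rc;
}

This relies on the hypervisor having consumed the descriptors by the
time the call returns, which is what the commit message above asserts.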
Reported-by: Abdul Haleem <abdhalee@...ux.vnet.ibm.com>
Tested-by: Devesh K. Singh <devesh_singh@...ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@...ux.ibm.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/net/ethernet/ibm/ibmvnic.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 0ae43d27cdcff..255de7d68cd33 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1586,6 +1586,8 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
(u64)tx_buff->indir_dma,
(u64)num_entries);
+ dma_unmap_single(dev, tx_buff->indir_dma,
+ sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
} else {
tx_buff->num_entries = num_entries;
lpar_rc = send_subcrq(adapter, handle_array[queue_num],
@@ -2747,7 +2749,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
union sub_crq *next;
int index;
int i, j;
- u8 *first;

restart_loop:
while (pending_scrq(adapter, scrq)) {
@@ -2777,14 +2778,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
txbuff->data_dma[j] = 0;
}

- /* if sub_crq was sent indirectly */
- first = &txbuff->indir_arr[0].generic.first;
- if (*first == IBMVNIC_CRQ_CMD) {
- dma_unmap_single(dev, txbuff->indir_dma,
- sizeof(txbuff->indir_arr),
- DMA_TO_DEVICE);
- *first = 0;
- }

if (txbuff->last_frag) {
dev_kfree_skb_any(txbuff->skb);
--
2.20.1