Date:   Thu, 21 Jan 2021 00:17:08 -0600
From:   Lijun Pan <ljp@...ux.ibm.com>
To:     netdev@...r.kernel.org
Cc:     Lijun Pan <ljp@...ux.ibm.com>
Subject: [PATCH net 1/3] ibmvnic: rework to ensure SCRQ entry reads are properly ordered

Move the dma_rmb() that sits between the calls to pending_scrq() and
ibmvnic_next_scrq() to the end of pending_scrq(), and document why it
is needed there. Also explain in detail why a dma_rmb() is placed at
the end of ibmvnic_next_scrq().
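
For reference, the ordering rule the barrier enforces looks roughly
like the sketch below. This is a minimal, hypothetical example of the
consumer side, not the ibmvnic code: struct demo_desc, DEMO_VALID,
demo_pending() and demo_consume() are made-up names; dma_rmb() is the
usual barrier from <asm/barrier.h>.

#include <linux/types.h>
#include <asm/barrier.h>

/* The device fills desc->data first and only then sets DEMO_VALID in
 * desc->flags (with an appropriate write barrier on its side).
 */
struct demo_desc {
	u8  flags;	/* DEMO_VALID is set last by the device */
	u64 data;	/* written by the device before flags */
};

#define DEMO_VALID	0x80

/* Peek at the current slot, then order all later reads of the
 * descriptor after the flags load, mirroring pending_scrq().
 */
static bool demo_pending(struct demo_desc *desc)
{
	bool valid = desc->flags & DEMO_VALID;

	/* Without this barrier the CPU may satisfy a later read of
	 * desc->data with a value loaded before the flags check.
	 */
	dma_rmb();

	return valid;
}

static u64 demo_consume(struct demo_desc *desc)
{
	if (!demo_pending(desc))
		return 0;

	/* Safe: the dma_rmb() in demo_pending() guarantees this load
	 * observes the data the device wrote before setting DEMO_VALID.
	 */
	return desc->data;
}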

Fixes: b71ec9522346 ("ibmvnic: Ensure that SCRQ entry reads are correctly ordered")
Signed-off-by: Lijun Pan <ljp@...ux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 41 +++++++++++++++++-------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 9778c83150f1..8e043683610f 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2511,12 +2511,6 @@ static int ibmvnic_poll(struct napi_struct *napi, int budget)
 
 		if (!pending_scrq(adapter, rx_scrq))
 			break;
-		/* The queue entry at the current index is peeked at above
-		 * to determine that there is a valid descriptor awaiting
-		 * processing. We want to be sure that the current slot
-		 * holds a valid descriptor before reading its contents.
-		 */
-		dma_rmb();
 		next = ibmvnic_next_scrq(adapter, rx_scrq);
 		rx_buff =
 		    (struct ibmvnic_rx_buff *)be64_to_cpu(next->
@@ -3256,13 +3250,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
 		int total_bytes = 0;
 		int num_packets = 0;
 
-		/* The queue entry at the current index is peeked at above
-		 * to determine that there is a valid descriptor awaiting
-		 * processing. We want to be sure that the current slot
-		 * holds a valid descriptor before reading its contents.
-		 */
-		dma_rmb();
-
 		next = ibmvnic_next_scrq(adapter, scrq);
 		for (i = 0; i < next->tx_comp.num_comps; i++) {
 			if (next->tx_comp.rcs[i])
@@ -3636,11 +3623,25 @@ static int pending_scrq(struct ibmvnic_adapter *adapter,
 			struct ibmvnic_sub_crq_queue *scrq)
 {
 	union sub_crq *entry = &scrq->msgs[scrq->cur];
+	int rc;
 
 	if (entry->generic.first & IBMVNIC_CRQ_CMD_RSP)
-		return 1;
+		rc = 1;
 	else
-		return 0;
+		rc = 0;
+
+	/* Ensure that the entire SCRQ descriptor scrq->msgs
+	 * has been loaded before reading its contents.
+	 * This barrier makes sure this function's entry, esp.
+	 * entry->generic.first & IBMVNIC_CRQ_CMD_RSP
+	 * 1. is loaded before ibmvnic_next_scrq()'s
+	 * entry->generic.first & IBMVNIC_CRQ_CMD_RSP;
+	 * 2. OR is loaded before ibmvnic_poll()'s
+	 * disable_scrq_irq()'s scrq->hw_irq.
+	 */
+	dma_rmb();
+
+	return rc;
 }
 
 static union sub_crq *ibmvnic_next_scrq(struct ibmvnic_adapter *adapter,
@@ -3659,8 +3660,14 @@ static union sub_crq *ibmvnic_next_scrq(struct ibmvnic_adapter *adapter,
 	}
 	spin_unlock_irqrestore(&scrq->lock, flags);
 
-	/* Ensure that the entire buffer descriptor has been
-	 * loaded before reading its contents
+	/* Ensure that the entire SCRQ descriptor scrq->msgs
+	 * has been loaded before reading its contents.
+	 * This barrier makes sure this function's entry, esp.
+	 * entry->generic.first & IBMVNIC_CRQ_CMD_RSP
+	 * 1. is loaded before ibmvnic_poll()'s
+	 * be64_to_cpu(next->rx_comp.correlator);
+	 * 2. OR is loaded before ibmvnic_complete_tx()'s
+	 * be32_to_cpu(next->tx_comp.correlators[i]).
 	 */
 	dma_rmb();
 
-- 
2.23.0
