Message-Id: <1697009600-22367-3-git-send-email-alibuda@linux.alibaba.com>
Date: Wed, 11 Oct 2023 15:33:17 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com>
To: kgraul@...ux.ibm.com,
wenjia@...ux.ibm.com,
jaka@...ux.ibm.com,
wintera@...ux.ibm.com
Cc: kuba@...nel.org,
davem@...emloft.net,
netdev@...r.kernel.org,
linux-s390@...r.kernel.org,
linux-rdma@...r.kernel.org,
"D. Wythe" <alibuda@...ux.alibaba.com>,
Heiko Carstens <hca@...ux.ibm.com>
Subject: [PATCH net 2/5] net/smc: fix incorrect barrier usage
This patch adds explicit CPU barriers to ensure memory
consistency, rather than relying on compiler barriers.
Besides, the atomicity between READ_ONCE and cmpxchg cannot
be guaranteed, so we need to use atomic ops. The simple way
is to replace READ_ONCE with xchg.
Fixes: 475f9ff63ee8 ("net/smc: fix application data exception")
Co-developed-by: Heiko Carstens <hca@...ux.ibm.com>
Signed-off-by: Heiko Carstens <hca@...ux.ibm.com>
Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
Link: https://lore.kernel.org/netdev/1b7c95be-d3d9-53c3-3152-cd835314d37c@linux.ibm.com/T/
---
net/smc/smc_core.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index d520ee6..cc7d72e 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1133,9 +1133,10 @@ static void smcr_buf_unuse(struct smc_buf_desc *buf_desc, bool is_rmb,
smc_buf_free(lgr, is_rmb, buf_desc);
} else {
- /* memzero_explicit provides potential memory barrier semantics */
- memzero_explicit(buf_desc->cpu_addr, buf_desc->len);
- WRITE_ONCE(buf_desc->used, 0);
+ memset(buf_desc->cpu_addr, 0, buf_desc->len);
+ /* make sure clearing buf_desc->used is not reordered ahead of the memset */
+ smp_mb__before_atomic();
+ xchg(&buf_desc->used, 0);
}
}
@@ -1146,17 +1147,21 @@ static void smc_buf_unuse(struct smc_connection *conn,
if (!lgr->is_smcd && conn->sndbuf_desc->is_vm) {
smcr_buf_unuse(conn->sndbuf_desc, false, lgr);
} else {
- memzero_explicit(conn->sndbuf_desc->cpu_addr, conn->sndbuf_desc->len);
- WRITE_ONCE(conn->sndbuf_desc->used, 0);
+ memset(conn->sndbuf_desc->cpu_addr, 0, conn->sndbuf_desc->len);
+ /* make sure clearing sndbuf_desc->used is not reordered ahead of the memset */
+ smp_mb__before_atomic();
+ xchg(&conn->sndbuf_desc->used, 0);
}
}
if (conn->rmb_desc) {
if (!lgr->is_smcd) {
smcr_buf_unuse(conn->rmb_desc, true, lgr);
} else {
- memzero_explicit(conn->rmb_desc->cpu_addr,
- conn->rmb_desc->len + sizeof(struct smcd_cdc_msg));
- WRITE_ONCE(conn->rmb_desc->used, 0);
+ memset(conn->rmb_desc->cpu_addr, 0,
+ conn->rmb_desc->len + sizeof(struct smcd_cdc_msg));
+ /* make sure clearing rmb_desc->used is not reordered ahead of the memset */
+ smp_mb__before_atomic();
+ xchg(&conn->rmb_desc->used, 0);
}
}
}
--
1.8.3.1