Message-ID: <1b7c95be-d3d9-53c3-3152-cd835314d37c@linux.ibm.com>
Date: Thu, 16 Feb 2023 14:49:15 +0100
From: Wenjia Zhang <wenjia@...ux.ibm.com>
To: "D. Wythe" <alibuda@...ux.alibaba.com>, kgraul@...ux.ibm.com,
jaka@...ux.ibm.com
Cc: kuba@...nel.org, davem@...emloft.net, netdev@...r.kernel.org,
linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net v2] net/smc: fix application data exception
On 16.02.23 07:39, D. Wythe wrote:
> From: "D. Wythe" <alibuda@...ux.alibaba.com>
>
> There is a certain probability that the following
> exceptions will occur in the wrk benchmark test:
>
> Running 10s test @ http://11.213.45.6:80
> 8 threads and 64 connections
> Thread Stats Avg Stdev Max +/- Stdev
> Latency 3.72ms 13.94ms 245.33ms 94.17%
> Req/Sec 1.96k 713.67 5.41k 75.16%
> 155262 requests in 10.10s, 23.10MB read
> Non-2xx or 3xx responses: 3
>
> The errors turn out to be HTTP 400 errors, a serious
> exception in our test: they mean that the application data was
> corrupted.
>
> Consider the following scenarios:
>
> CPU0                                CPU1
>
> buf_desc->used = 0;
>                                     cmpxchg(buf_desc->used, 0, 1)
>                                     deal_with(buf_desc)
>
> memset(buf_desc->cpu_addr, 0);
>
> This will cause the data received by a victim connection to be cleared,
> thus triggering an HTTP 400 error in the server.
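
For readers following the race: below is a minimal userspace C11 model
of that window (a hypothetical struct buf, not the kernel's
smc_buf_desc). The release path publishes the buffer before wiping it,
so a concurrent claimer can have its freshly received data destroyed:

	#include <stdatomic.h>
	#include <string.h>

	/* Hypothetical stand-in for the buffer descriptor: an atomic
	 * free/used flag plus the payload it guards. */
	struct buf {
		atomic_int used;
		char data[64];
	};

	/* CPU1's side: claim a free buffer, as the SMC slot lookup
	 * does with cmpxchg(buf_desc->used, 0, 1). */
	static int claim(struct buf *b)
	{
		int expected = 0;

		return atomic_compare_exchange_strong(&b->used,
						      &expected, 1);
	}

	/* CPU0's side, old ordering: mark the buffer free first, wipe
	 * it second. Between the two statements claim() can succeed,
	 * and the memset() then clears data the new owner already
	 * relies on -- exactly the corruption seen above. */
	static void release_racy(struct buf *b)
	{
		atomic_store(&b->used, 0);
		memset(b->data, 0, sizeof(b->data));
	}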
>
> This patch swaps the order of clearing ->used and the memset(),
> and adds a barrier to ensure memory consistency.
>
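Continuing the model above, the fixed ordering wipes first and only
then publishes the buffer. A C11 release store is used here to make
the ordering explicit in portable code; the patch itself relies on
memzero_explicit() followed by WRITE_ONCE(), so this is a sketch-level
correspondence, not an exact one:

	/* CPU0's side, fixed ordering: the buffer only becomes
	 * claimable after it has been zeroed, so a successful claim()
	 * can no longer be followed by a wipe of live data. */
	static void release_fixed(struct buf *b)
	{
		memset(b->data, 0, sizeof(b->data));
		atomic_store_explicit(&b->used, 0,
				      memory_order_release);
	}
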
> Fixes: 1c5526968e27 ("net/smc: Clear memory when release and reuse buffer")
> Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
> ---
> v2: rebased onto the latest net tree.
>
Reviewed-by: Wenjia Zhang <wenjia@...ux.ibm.com>
> net/smc/smc_core.c | 17 ++++++++---------
> 1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
> index c305d8d..c19d4b7 100644
> --- a/net/smc/smc_core.c
> +++ b/net/smc/smc_core.c
> @@ -1120,8 +1120,9 @@ static void smcr_buf_unuse(struct smc_buf_desc *buf_desc, bool is_rmb,
>
> smc_buf_free(lgr, is_rmb, buf_desc);
> } else {
> - buf_desc->used = 0;
> - memset(buf_desc->cpu_addr, 0, buf_desc->len);
> + /* memzero_explicit provides potential memory barrier semantics */
> + memzero_explicit(buf_desc->cpu_addr, buf_desc->len);
> + WRITE_ONCE(buf_desc->used, 0);
> }
> }
>
> @@ -1132,19 +1133,17 @@ static void smc_buf_unuse(struct smc_connection *conn,
> if (!lgr->is_smcd && conn->sndbuf_desc->is_vm) {
> smcr_buf_unuse(conn->sndbuf_desc, false, lgr);
> } else {
> - conn->sndbuf_desc->used = 0;
> - memset(conn->sndbuf_desc->cpu_addr, 0,
> - conn->sndbuf_desc->len);
> + memzero_explicit(conn->sndbuf_desc->cpu_addr, conn->sndbuf_desc->len);
> + WRITE_ONCE(conn->sndbuf_desc->used, 0);
> }
> }
> if (conn->rmb_desc) {
> if (!lgr->is_smcd) {
> smcr_buf_unuse(conn->rmb_desc, true, lgr);
> } else {
> - conn->rmb_desc->used = 0;
> - memset(conn->rmb_desc->cpu_addr, 0,
> - conn->rmb_desc->len +
> - sizeof(struct smcd_cdc_msg));
> + memzero_explicit(conn->rmb_desc->cpu_addr,
> + conn->rmb_desc->len + sizeof(struct smcd_cdc_msg));
> + WRITE_ONCE(conn->rmb_desc->used, 0);
> }
> }
> }