Message-ID: <20221108133847.2304539-2-xuhaoyue1@hisilicon.com>
Date: Tue, 8 Nov 2022 21:38:46 +0800
From: Haoyue Xu <xuhaoyue1@...ilicon.com>
To: <jgg@...dia.com>, <leon@...nel.org>
CC: <linux-rdma@...r.kernel.org>, <linuxarm@...wei.com>,
<linux-kernel@...r.kernel.org>, <xuhaoyue1@...ilicon.com>
Subject: [PATCH v3 for-rc 1/2] RDMA/hns: Fix ext_sge num error when post send
From: Luoyouming <luoyouming@...wei.com>
In the HNS RoCE driver, the sge is divided into standard sge and
extended sge. RC/XRC QPs have 2 standard sge, while UD QPs have 0.
In the RC SQ inline scenario, the standard sge is used if the data
does not exceed 32 bytes; otherwise, only the extended sge is used
to fill the data. Currently, when filling the extended sge, max_gs
is used directly as the number of extended sge without subtracting
the number of standard sge, which is a logical error. Fix it by
subtracting the number of standard sge from max_gs to get the
actual number of extended sge.
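
For illustration only (not part of the patch), assuming HNS_ROCE_SGE_SIZE
is 16 bytes and HNS_ROCE_SGE_IN_WQE is 2, the corrected bound works out
like this:

	/* illustrative sketch, not part of the patch */
	unsigned int std_sge_num = get_std_sge_num(qp); /* 2 for RC/XRC, 0 for UD/GSI */
	unsigned int ext_sge_num = qp->sq.max_gs - std_sge_num;
	unsigned int inline_cap = ext_sge_num * HNS_ROCE_SGE_SIZE;

	/* e.g. an RC QP with sq.max_gs = 8 (hypothetical value):
	 * old bound: 8 * 16 = 128 bytes, new bound: (8 - 2) * 16 = 96 bytes
	 */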
Fixes: 30b707886aeb ("RDMA/hns: Support inline data in extented sge space for RC")
Signed-off-by: Luoyouming <luoyouming@...wei.com>
Signed-off-by: Haoyue Xu <xuhaoyue1@...ilicon.com>
---
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 1435fe2ea176..0937db738be7 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -187,20 +187,29 @@ static void set_atomic_seg(const struct ib_send_wr *wr,
 	hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SGE_NUM, valid_num_sge);
 }
 
+static unsigned int get_std_sge_num(struct hns_roce_qp *qp)
+{
+	if (qp->ibqp.qp_type == IB_QPT_GSI || qp->ibqp.qp_type == IB_QPT_UD)
+		return 0;
+
+	return HNS_ROCE_SGE_IN_WQE;
+}
+
 static int fill_ext_sge_inl_data(struct hns_roce_qp *qp,
 				 const struct ib_send_wr *wr,
 				 unsigned int *sge_idx, u32 msg_len)
 {
 	struct ib_device *ibdev = &(to_hr_dev(qp->ibqp.device))->ib_dev;
-	unsigned int ext_sge_sz = qp->sq.max_gs * HNS_ROCE_SGE_SIZE;
 	unsigned int left_len_in_pg;
 	unsigned int idx = *sge_idx;
+	unsigned int std_sge_num;
 	unsigned int i = 0;
 	unsigned int len;
 	void *addr;
 	void *dseg;
 
-	if (msg_len > ext_sge_sz) {
+	std_sge_num = get_std_sge_num(qp);
+	if (msg_len > (qp->sq.max_gs - std_sge_num) * HNS_ROCE_SGE_SIZE) {
 		ibdev_err(ibdev,
 			  "no enough extended sge space for inline data.\n");
 		return -EINVAL;
--
2.30.0