Message-ID: <3d2b0f6d-6214-8d04-8356-852c34563501@fujitsu.com>
Date: Mon, 5 Dec 2022 10:07:11 +0000
From: "lizhijian@...itsu.com" <lizhijian@...itsu.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "zyjzyj2000@...il.com" <zyjzyj2000@...il.com>,
"leon@...nel.org" <leon@...nel.org>,
Bob Pearson <rpearsonhpe@...il.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Mark Bloch <mbloch@...dia.com>, Tom Talpey <tom@...pey.com>,
"tomasz.gromadzki@...el.com" <tomasz.gromadzki@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
"yangx.jy@...itsu.com" <yangx.jy@...itsu.com>,
"Yasunori Gotou (Fujitsu)" <y-goto@...itsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [for-next PATCH v6 09/10] RDMA/cm: Make QP FLUSHABLE
Jason
Could you take a look at this update:
- Make QP FLUSHABLE for supported devices only
- Add more explanation
commit 1a95f94125b59183d5fc643a917e0a2e7bb07981
Author: Li Zhijian <lizhijian@...itsu.com>
Date: Mon Sep 26 17:53:06 2022 +0800
RDMA/cm: Make QP FLUSHABLE for supported device
Similar to the RDMA READ and ATOMIC qp attributes that CM enables by
default, enable the FLUSH attribute for supported devices. This gives
applications built with the rdma_create_ep() and rdma_accept() APIs a
FLUSH qp attribute natively, so that users can request FLUSH operations
more simply.
Note that a FLUSH operation requires FLUSH support from the device (HCA),
the memory region (MR), and the QP at the same time, so it is safe to
enable the FLUSH qp attribute by default here.
The FLUSH attribute can be disabled via the modify_qp() interface.
Signed-off-by: Li Zhijian <lizhijian@...itsu.com>
---
V6: enable flush for supported device only #Jason
V5: new patch, inspired by Bob
Signed-off-by: Li Zhijian <lizhijian@...itsu.com>
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 1f9938a2c475..603c0aecc361 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -4094,9 +4094,18 @@ static int cm_init_qp_init_attr(struct cm_id_private *cm_id_priv,
 		*qp_attr_mask = IB_QP_STATE | IB_QP_ACCESS_FLAGS |
 				IB_QP_PKEY_INDEX | IB_QP_PORT;
 		qp_attr->qp_access_flags = IB_ACCESS_REMOTE_WRITE;
-		if (cm_id_priv->responder_resources)
+		if (cm_id_priv->responder_resources) {
+			struct ib_device *ib_dev = cm_id_priv->id.device;
+			u64 support_flush = ib_dev->attrs.device_cap_flags &
+				(IB_DEVICE_FLUSH_GLOBAL | IB_DEVICE_FLUSH_PERSISTENT);
+			u32 flushable = support_flush ?
+				(IB_ACCESS_FLUSH_GLOBAL |
+				 IB_ACCESS_FLUSH_PERSISTENT) : 0;
+
 			qp_attr->qp_access_flags |= IB_ACCESS_REMOTE_READ |
-						    IB_ACCESS_REMOTE_ATOMIC;
+						    IB_ACCESS_REMOTE_ATOMIC |
+						    flushable;
+		}
 		qp_attr->pkey_index = cm_id_priv->av.pkey_index;
 		if (cm_id_priv->av.port)
 			qp_attr->port_num = cm_id_priv->av.port->port_num;
Thanks
Zhijian
On 28/11/2022 18:23, lizhijian@...itsu.com wrote:
>
>
> On 25/11/2022 22:16, Jason Gunthorpe wrote:
>>>>>>> ---
>>>>>>> drivers/infiniband/core/cm.c | 3 ++-
>>>>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
>>>>>>> index 1f9938a2c475..58837aac980b 100644
>>>>>>> --- a/drivers/infiniband/core/cm.c
>>>>>>> +++ b/drivers/infiniband/core/cm.c
>>>>>>> @@ -4096,7 +4096,8 @@ static int cm_init_qp_init_attr(struct cm_id_private *cm_id_priv,
>>>>>>> qp_attr->qp_access_flags = IB_ACCESS_REMOTE_WRITE;
>>>>>>> if (cm_id_priv->responder_resources)
>>>>>>> qp_attr->qp_access_flags |= IB_ACCESS_REMOTE_READ |
>>>>>>> - IB_ACCESS_REMOTE_ATOMIC;
>>>>>>> + IB_ACCESS_REMOTE_ATOMIC |
>>>>>>> + IB_ACCESS_FLUSHABLE;
>>>>>> What is the point of this? Nothing checks IB_ACCESS_FLUSHABLE ?
>
>>> I'm fine to expand it in next version.
>> OIC, that is why it escaped grep
>>
>> But this is back to my original question - why is it OK to do this
>> here in CMA? Shouldn't this cause other drivers to refuse to create
>> the QP because they don't support the flag?
>>
>
> Jason,
>
> My flush example has been broken since V5, where the responder side
> started checking qp_access_flags, so I added this patch.
>
> I also agree it's a good idea to add this flush flag only for
> supporting drivers, but I haven't figured out how to achieve that in
> the current RDMA stack.
>
> After more digging into rdma-core, I found that this flag can also be
> set from userspace by calling ibv_modify_qp().
> On the server side (responder), ibv_modify_qp() must be called after
> rdma_accept(), because rdma_accept() internally modifies
> qp_access_flags again (rdma_get_request() is the first place that
> modifies qp_access_flags).
>
> Back to the original question: IIUC, current rdma-core has no API to
> set qp_access_flags during QP creation.
>
> A FLUSH operation on the responder side checks both mr->access_flags
> and qp_access_flags, so an unsupported device/driver cannot perform
> FLUSH at all, even though the qp_access_flags bits are applied for all
> drivers.
>
>
> ------------------------------------------
> (gdb) bt
> #0 __ibv_modify_qp_1_1 (qp=0x40fbf0, attr=0x7fffffffd920, attr_mask=57)
> at /home/lizhijian/rdma-core/libibverbs/verbs.c:715
> #1 0x00007ffff7faa1db in ucma_init_conn_qp (id_priv=0x40f6d0, qp=0x40fbf0)
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1380
> #2 0x00007ffff7faada2 in rdma_create_qp_ex (id=0x40f6d0,
> attr=0x7fffffffda30)
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1676
> #3 0x00007ffff7faae94 in rdma_create_qp (id=0x40f6d0, pd=0x407710,
> qp_init_attr=0x7fffffffdae0)
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1702
> #4 0x00007ffff7fab5d3 in rdma_get_request (listen=0x40ede0, id=0x4051a8
> <id>)
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1883
> #5 0x0000000000401af9 in run () at
> /home/lizhijian/rdma-core/librdmacm/examples/rdma_flush_server.c:91
> #6 0x00000000004021ea in main (argc=7, argv=0x7fffffffe228)
> at /home/lizhijian/rdma-core/librdmacm/examples/rdma_flush_server.c:282
>
> (gdb) bt
> #0 __ibv_modify_qp_1_1 (qp=0x40fbf0, attr=0x7fffffffd930,
> attr_mask=1216897)
> at /home/lizhijian/rdma-core/libibverbs/verbs.c:715
> #1 0x00007ffff7fa9f16 in ucma_modify_qp_rtr (id=0x40f6d0, resp_res=128
> '\200')
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1304
> #2 0x00007ffff7fab80a in rdma_accept (id=0x40f6d0, conn_param=0x0)
> at /home/lizhijian/rdma-core/librdmacm/cma.c:1932
> #3 0x0000000000401c8a in run () at
> /home/lizhijian/rdma-core/librdmacm/examples/rdma_flush_server.c:132
> #4 0x00000000004021ea in main (argc=7, argv=0x7fffffffe228)
> at /home/lizhijian/rdma-core/librdmacm/examples/rdma_flush_server.c:282
>
>
>> Jason