Message-ID: <CAMGffEkyNnSXDfwuzCQ_ERZ-53OnoJ7gOF4eL1MAPYc74V43iQ@mail.gmail.com>
Date: Wed, 19 Apr 2023 15:20:32 +0200
From: Jinpu Wang <jinpu.wang@...os.com>
To: "Zhijian Li (Fujitsu)" <lizhijian@...itsu.com>
Cc: Leon Romanovsky <leon@...nel.org>,
Zhu Yanjun <yanjun.zhu@...ux.dev>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
"haris.iqbal@...os.com" <haris.iqbal@...os.com>,
"jgg@...pe.ca" <jgg@...pe.ca>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH for-next 2/3] RDMA/rtrs: Fix rxe_dealloc_pd warning
On Wed, Apr 19, 2023 at 11:53 AM Zhijian Li (Fujitsu)
<lizhijian@...itsu.com> wrote:
>
> Leon, Guoqing
>
>
> On 18/04/2023 15:57, Leon Romanovsky wrote:
> >>>> Currently, without this patch:
> >>>> 1. PD and clt_path->s.dev are shared among connections.
> >>>> 2. every con[n]'s cleanup phase will call destroy_con_cq_qp().
> >>>> 3. clt_path->s.dev_ref is always decreased in destroy_con_cq_qp(), and when
> >>>> clt_path->s.dev_ref reaches zero, the PD is destroyed.
> >>>> 4. when con[1] fails to be created, con[1] does not take a clt_path->s.dev reference, but it still tries to decrease clt_path->s.dev_ref <<< it's wrong to do that.
> >>> So please fix it by making sure that failure to create con[1] will
> >>> release resources which were allocated. If con[1] didn't increase
> >>> s.dev_ref, it shouldn't decrease it either.
> >> You are right, the current patch does exactly that.
> >> It introduces a per-con ownership flag 'has_dev' to indicate whether this con has taken s.dev,
> >> so that its cleanup phase only decreases s.dev_ref when it should.
> > has_dev is a workaround, not a solution. With a proper error-unwind
> > sequence, you won't need the extra flag.
> >
> > Thanks
> >
>
> How about the changes below?
>
> commit 61dba725384e226d472b8142d70d40d4103df87a
> Author: Li Zhijian <lizhijian@...itsu.com>
> Date: Wed Apr 19 17:42:26 2023 +0800
>
> RDMA/rtrs: Fix rxe_dealloc_pd warning
>
> con[0] always sets s.dev_ref to 1; correspondingly, we should let it
> release the last reference.
>
> Previously:
> 1. PD and clt_path->s.dev are shared among connections.
> 2. every con[n]'s cleanup phase will call destroy_con_cq_qp().
> 3. clt_path->s.dev_ref is always decreased in destroy_con_cq_qp(), and when
> clt_path->s.dev_ref reaches zero, the PD is destroyed.
> 4. when con[1] fails to be created, con[1] does not take a clt_path->s.dev
> reference, but it still tries to decrease clt_path->s.dev_ref <<< it's wrong to do that.
>
> The warning occurs when destroying a PD whose reference count is not zero.
> Precondition: clt_path->s.con_num is 2,
> so two CM connections will be created as below:
> CPU0 CPU1
> init_conns { |
> create_cm() // a. con[0] created |
> | a'. rtrs_clt_rdma_cm_handler() {
> | rtrs_rdma_addr_resolved()
> | create_con_cq_qp(con); << con[0]
> | }
> | at this moment, the refcnt of the PD has been increased to 2+
> |
> create_cm() // b. cid = 1, failed |
> destroy_con_cq_qp() |
> rtrs_ib_dev_put() |
> dev_free() |
> ib_dealloc_pd(dev->ib_pd) << PD |
> is destroyed, but refcnt is |
> still greater than 0 |
> }
>
> diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> index 80abf45a197a..1eb652dedca3 100644
> --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> @@ -1743,6 +1743,15 @@ static void destroy_con_cq_qp(struct rtrs_clt_con *con)
> con->rsp_ius = NULL;
> con->queue_num = 0;
> }
> +
> + /*
> + * Every con will try to decrease s.dev_ref, but we should
> + * reserve the last s.dev_ref for con[0], in case con[1+]'s
> + * cleanup phase calls rtrs_ib_dev_put(clt_path->s.dev) too early.
> + */
> + if (con->c.cid != 0 && clt_path->s.dev_ref == 1)
> + return;
> +
> if (clt_path->s.dev_ref && !--clt_path->s.dev_ref) {
> rtrs_ib_dev_put(clt_path->s.dev);
> clt_path->s.dev = NULL;
I ran a regression test in our test env; it triggers the warning at line 1681:
1681 if (WARN_ON(clt_path->s.dev))
[ 1333.042633] ------------[ cut here ]------------
[ 1333.042650] WARNING: CPU: 8 PID: 559 at
/root/kernel-test/ibnbd2/rtrs/rtrs-clt.c:1681
rtrs_clt_rdma_cm_handler+0x864/0x8a0 [rtrs_client]
[ 1333.042651] Modules linked in: loop rnbd_client(O) rtrs_client(O)
rtrs_core(O) kvm_amd kvm input_leds led_class irqbypass crc32_pclmul
aesni_intel sp5100_tco evdev libaes watchdog sg k10temp crypto_simd
fam15h_power ipmi_si serio_raw cryptd ipmi_devintf glue_helper
ipmi_msghandler acpi_cpufreq button ib_ipoib ib_umad null_blk brd
rdma_cm iw_cm ib_cm ip_tables x_tables autofs4 raid10 raid456
async_raid6_recov async_memcpy async_pq async_xor async_tx xor
raid6_pq libcrc32c raid1 raid0 linear mlx4_ib md_mod ib_uverbs ib_core
sd_mod t10_pi crc_t10dif crct10dif_generic ahci libahci
crct10dif_pclmul crct10dif_common crc32c_intel igb libata usb_storage
psmouse i2c_piix4 i2c_algo_bit mlx4_core dca scsi_mod i2c_core ptp
pps_core
[ 1333.042737] CPU: 8 PID: 559 Comm: kworker/u128:1 Tainted: G
O 5.10.136-pserver-develop-5.10 #257
[ 1333.042738] Hardware name: Supermicro H8QG6/H8QG6, BIOS 3.00 09/04/2012
[ 1333.042752] Workqueue: rdma_cm cma_work_handler [rdma_cm]
[ 1333.042758] RIP: 0010:rtrs_clt_rdma_cm_handler+0x864/0x8a0 [rtrs_client]
[ 1333.042761] Code: ff bb ea ff ff ff e8 db a5 24 fc 49 8d b4 24 10
01 00 00 89 da 48 c7 c7 40 93 5b c0 e8 4b 47 21 fc 4d 8b 65 00 e9 15
fe ff ff <0f> 0b 4c 89 ff bb ea ff ff ff e8 ad a5 24 fc eb d0 0f 0b 4c
89 ff
[ 1333.042763] RSP: 0018:ffffaff68e57bdb0 EFLAGS: 00010286
[ 1333.042765] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9eddc0051420
[ 1333.042767] RDX: ffff9ee4ef716e40 RSI: ffff9f14ea288f30 RDI: ffff9eddc88db240
[ 1333.042768] RBP: ffffaff68e57be50 R08: 0000000000000000 R09: 006d635f616d6472
[ 1333.042769] R10: ffffaff68e57be68 R11: 0000000000000000 R12: ffff9edde1388000
[ 1333.042771] R13: ffff9eddc88db200 R14: ffff9edde1388000 R15: ffff9eddc88db240
[ 1333.042773] FS: 0000000000000000(0000) GS:ffff9eecc7c00000(0000)
knlGS:0000000000000000
[ 1333.042774] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1333.042776] CR2: 00007f0ac4ed4004 CR3: 0000002b5040a000 CR4: 00000000000406e0
[ 1333.042777] Call Trace:
[ 1333.042790] ? newidle_balance+0x25e/0x3c0
[ 1333.042795] ? psi_group_change+0x43/0x230
[ 1333.042801] ? cma_cm_event_handler+0x23/0xb0 [rdma_cm]
[ 1333.042807] cma_cm_event_handler+0x23/0xb0 [rdma_cm]
[ 1333.042814] cma_work_handler+0x5a/0xb0 [rdma_cm]
[ 1333.042819] process_one_work+0x1f3/0x390
[ 1333.042822] worker_thread+0x2d/0x3c0