Message-ID: <20250715191629.GA2116306@ziepe.ca>
Date: Tue, 15 Jul 2025 16:16:29 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Leon Romanovsky <leon@...nel.org>
Cc: Abhijit Gangurde <abhijit.gangurde@....com>, shannon.nelson@....com,
	brett.creeley@....com, davem@...emloft.net, edumazet@...gle.com,
	kuba@...nel.org, pabeni@...hat.com, corbet@....net,
	andrew+netdev@...n.ch, allen.hubbe@....com, nikhil.agarwal@....com,
	linux-rdma@...r.kernel.org, netdev@...r.kernel.org,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	Andrew Boyer <andrew.boyer@....com>
Subject: Re: [PATCH v3 10/14] RDMA/ionic: Register device ops for control path

On Sun, Jul 13, 2025 at 09:27:53AM +0300, Leon Romanovsky wrote:
> Let's do what all other drivers do, please. I prefer the simplest
> solution, and objects that can potentially be around after verbs
> objects were cleaned up don't sound right.

I think it is OK; at least for the QP it makes sense and matches some
other drivers.

+static void ionic_qp_event(struct ionic_ibdev *dev, u32 qpid, u8 code)
+{
+       struct ib_event ibev;
+       struct ionic_qp *qp;
+
+       rcu_read_lock();
+       qp = xa_load(&dev->qp_tbl, qpid);
+       if (qp)
+               kref_get(&qp->qp_kref);
+       rcu_read_unlock();
+

The above is an async event path, and the kref is effectively the
open-coded rwlock pattern we use often.

The unlock triggers a completion:

+       kref_put(&qp->qp_kref, ionic_qp_complete);

+static inline void ionic_qp_complete(struct kref *kref)
+{
+       struct ionic_qp *qp = container_of(kref, struct ionic_qp, qp_kref);
+
+       complete(&qp->qp_rel_comp);
+}

Which acts as the unlock. And then the QP destruction:

+int ionic_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+{
+       kref_put(&qp->qp_kref, ionic_qp_complete);
+       wait_for_completion(&qp->qp_rel_comp);

Which is the typical "write" side of the lock.
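
Stitching the quoted hunks together, the whole pattern reads roughly
like this (a sketch; the event-dispatch body is elided from the quoted
patch, so that part is assumed):

       /* Read side: pin the QP across async event delivery */
       rcu_read_lock();
       qp = xa_load(&dev->qp_tbl, qpid);
       if (qp)
               kref_get(&qp->qp_kref);         /* "read lock" */
       rcu_read_unlock();
       if (!qp)
               return;
       /* ... build and dispatch the ib_event ... */
       kref_put(&qp->qp_kref, ionic_qp_complete);  /* "read unlock" */

       /* Write side: destroy drops the initial ref and waits out readers */
       xa_erase_irq(&dev->qp_tbl, qp->qpid);
       kref_put(&qp->qp_kref, ionic_qp_complete);
       wait_for_completion(&qp->qp_rel_comp);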

So this is all normal: the qp doesn't outlive destroy, and destroy
waits for all the async event delivery to complete. It has to, since
we free the underlying memory in the core code.

As long as the other cases are like this, it is fine.

+       xa_erase_irq(&dev->qp_tbl, qp->qpid);
+       synchronize_rcu();

This should go away though; I don't like to see synchronize_rcu(). The
idea is that you kfree the QP with RCU, but the core code doesn't do
that...
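
If the core did free with RCU, the lockless lookup above would be
sound without the synchronize_rcu(). A hypothetical sketch, assuming
ionic_qp gained a struct rcu_head member named rcu (the quoted patch
has no such member):

       xa_erase_irq(&dev->qp_tbl, qp->qpid);
       /* lookups racing with the erase still see a valid qp until
        * the grace period ends */
       kfree_rcu(qp, rcu);     /* hypothetical rcu_head in ionic_qp */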

So in the short term you should take the lock instead of using RCU:

       xa_lock(&dev->qp_tbl);
       qp = xa_load(&dev->qp_tbl, qpid);
       if (qp)
               kref_get(&qp->qp_kref);
       xa_unlock(&dev->qp_tbl);
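
With the lookup serialized against xa_erase_irq() by the xarray's own
spinlock, the synchronize_rcu() after the erase is no longer needed:
once the erase returns, no new lookup can find the QP, and the
kref/completion handshake already covers any event delivery that raced
ahead of it.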

Jason
