Message-ID: <CAMGffEn-bOgELLb1rTg9W+f2Hqd6A46T1rkDZegKov8TrAkDxA@mail.gmail.com>
Date: Tue, 27 Oct 2020 08:33:45 +0100
From: Jinpu Wang <jinpu.wang@...ud.ionos.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Danil Kipnis <danil.kipnis@...ud.ionos.com>,
Doug Ledford <dledford@...hat.com>,
Christoph Hellwig <hch@....de>,
Keith Busch <kbusch@...nel.org>,
linux-nvme@...ts.infradead.org, linux-rdma@...r.kernel.org,
Max Gurtovoy <mgurtovoy@...dia.com>,
netdev <netdev@...r.kernel.org>, rds-devel@....oracle.com,
Sagi Grimberg <sagi@...mberg.me>,
Santosh Shilimkar <santosh.shilimkar@...cle.com>,
Guoqing Jiang <guoqing.jiang@...ud.ionos.com>,
Leon Romanovsky <leonro@...dia.com>
Subject: Re: [PATCH] RDMA: Add rdma_connect_locked()
On Mon, Oct 26, 2020 at 3:25 PM Jason Gunthorpe <jgg@...dia.com> wrote:
>
> There are two flows for handling RDMA_CM_EVENT_ROUTE_RESOLVED, either the
> handler triggers a completion and another thread does rdma_connect() or
> the handler directly calls rdma_connect().
>
> In all cases rdma_connect() needs to hold the handler_mutex, but when
> handlers are invoked this is already held by the core code. This causes
> ULPs using the second method to deadlock.
>
> Provide rdma_connect_locked() and have all ULPs call it from their
> handlers.
>
> Reported-by: Guoqing Jiang <guoqing.jiang@...ud.ionos.com>
> Fixes: 2a7cec538169 ("RDMA/cma: Fix locking for the RDMA_CM_CONNECT state")
> Signed-off-by: Jason Gunthorpe <jgg@...dia.com>
> ---
> drivers/infiniband/core/cma.c | 39 +++++++++++++++++++++---
> drivers/infiniband/ulp/iser/iser_verbs.c | 2 +-
> drivers/infiniband/ulp/rtrs/rtrs-clt.c | 4 +--
> drivers/nvme/host/rdma.c | 10 +++---
> include/rdma/rdma_cm.h | 13 +-------
> net/rds/ib_cm.c | 5 +--
> 6 files changed, 47 insertions(+), 26 deletions(-)
>
> It seems people are not testing these four ULPs against rdma-next. Here is a
> quick fix for the issue:
>
> https://lore.kernel.org/r/3b1f7767-98e2-93e0-b718-16d1c5346140@cloud.ionos.com
>
> Jason
>
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 7c2ab1f2fbea37..2eaaa1292fb847 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -405,10 +405,10 @@ static int cma_comp_exch(struct rdma_id_private *id_priv,
> /*
> * The FSM uses a funny double locking where state is protected by both
> * the handler_mutex and the spinlock. State is not allowed to change
> - * away from a handler_mutex protected value without also holding
> + * to/from a handler_mutex protected value without also holding
> * handler_mutex.
> */
> - if (comp == RDMA_CM_CONNECT)
> + if (comp == RDMA_CM_CONNECT || exch == RDMA_CM_CONNECT)
> lockdep_assert_held(&id_priv->handler_mutex);
>
> spin_lock_irqsave(&id_priv->lock, flags);
> @@ -4038,13 +4038,20 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
> return ret;
> }
>
> -int rdma_connect(struct rdma_cm_id *id, struct rdma_conn_param *conn_param)
> +/**
> + * rdma_connect_locked - Initiate an active connection request.
> + * @id: Connection identifier to connect.
> + * @conn_param: Connection information used for connected QPs.
> + *
> + * Same as rdma_connect() but can only be called from the
> + * RDMA_CM_EVENT_ROUTE_RESOLVED handler callback.
> + */
> +int rdma_connect_locked(struct rdma_cm_id *id, struct rdma_conn_param *conn_param)
> {
> struct rdma_id_private *id_priv =
> container_of(id, struct rdma_id_private, id);
> int ret;
>
> - mutex_lock(&id_priv->handler_mutex);
> if (!cma_comp_exch(id_priv, RDMA_CM_ROUTE_RESOLVED, RDMA_CM_CONNECT)) {
> ret = -EINVAL;
> goto err_unlock;
> @@ -4071,6 +4078,30 @@ int rdma_connect(struct rdma_cm_id *id, struct rdma_conn_param *conn_param)
> err_state:
> cma_comp_exch(id_priv, RDMA_CM_CONNECT, RDMA_CM_ROUTE_RESOLVED);
> err_unlock:
> + return ret;
> +}
> +EXPORT_SYMBOL(rdma_connect_locked);
> +
> +/**
> + * rdma_connect - Initiate an active connection request.
> + * @id: Connection identifier to connect.
> + * @conn_param: Connection information used for connected QPs.
> + *
> + * Users must have resolved a route for the rdma_cm_id to connect with by having
> + * called rdma_resolve_route before calling this routine.
> + *
> + * This call will either connect to a remote QP or obtain remote QP information
> + * for unconnected rdma_cm_id's. The actual operation is based on the
> + * rdma_cm_id's port space.
> + */
> +int rdma_connect(struct rdma_cm_id *id, struct rdma_conn_param *conn_param)
> +{
> + struct rdma_id_private *id_priv =
> + container_of(id, struct rdma_id_private, id);
> + int ret;
> +
> + mutex_lock(&id_priv->handler_mutex);
> + ret = rdma_connect_locked(id, conn_param);
> mutex_unlock(&id_priv->handler_mutex);
> return ret;
> }
> diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
> index 2f3ebc0a75d924..2bd18b00689341 100644
> --- a/drivers/infiniband/ulp/iser/iser_verbs.c
> +++ b/drivers/infiniband/ulp/iser/iser_verbs.c
> @@ -620,7 +620,7 @@ static void iser_route_handler(struct rdma_cm_id *cma_id)
> conn_param.private_data = (void *)&req_hdr;
> conn_param.private_data_len = sizeof(struct iser_cm_hdr);
>
> - ret = rdma_connect(cma_id, &conn_param);
> + ret = rdma_connect_locked(cma_id, &conn_param);
> if (ret) {
> iser_err("failure connecting: %d\n", ret);
> goto failure;
> diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> index 776e89231c52f7..f298adc02acba2 100644
> --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
> @@ -1674,9 +1674,9 @@ static int rtrs_rdma_route_resolved(struct rtrs_clt_con *con)
> uuid_copy(&msg.sess_uuid, &sess->s.uuid);
> uuid_copy(&msg.paths_uuid, &clt->paths_uuid);
>
> - err = rdma_connect(con->c.cm_id, ¶m);
> + err = rdma_connect_locked(con->c.cm_id, ¶m);
> if (err)
> - rtrs_err(clt, "rdma_connect(): %d\n", err);
> + rtrs_err(clt, "rdma_connect_locked(): %d\n", err);
>
> return err;
> }
For rtrs, looks good to me!
Thanks for the quick fix.
Acked-by: Jack Wang <jinpu.wang@...ud.ionos.com>