Message-ID: <564BD1AF.60200@sandisk.com>
Date: Tue, 17 Nov 2015 17:17:35 -0800
From: Bart Van Assche <bart.vanassche@...disk.com>
To: Christoph Hellwig <hch@....de>, <linux-rdma@...r.kernel.org>
CC: <sagig@....mellanox.co.il>, <bart.vanassche@...disk.com>,
<axboe@...com>, <linux-scsi@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/9] srpt: chain RDMA READ/WRITE requests
On 11/13/2015 05:46 AM, Christoph Hellwig wrote:
> - ret = ib_post_send(ch->qp, &wr.wr, &bad_wr);
> - if (ret)
> - break;
> + if (i == n_rdma - 1) {
> + /* only get completion event for the last rdma read */
> + if (dir == DMA_TO_DEVICE)
> + wr->wr.send_flags = IB_SEND_SIGNALED;
> + wr->wr.next = NULL;
> + } else {
> + wr->wr.next = &ioctx->rdma_ius[i + 1].wr;
> + }
> }
>
> + ret = ib_post_send(ch->qp, &ioctx->rdma_ius->wr, &bad_wr);
> if (ret)
> pr_err("%s[%d]: ib_post_send() returned %d for %d/%d\n",
> __func__, __LINE__, ret, i, n_rdma);
Hello Christoph,
Chaining RDMA requests is a great idea. But it seems to me that this
patch is based on the assumption that posting multiple RDMA requests
either succeeds as a whole or fails as a whole. Sorry, but I'm not sure
the verbs API guarantees this. In the ib_srpt driver a QP can be changed
into the error state at any time, and there might be drivers that report
an immediate posting failure in that case. I think that even when
chaining RDMA requests we still need a mechanism to wait until ongoing
RDMA transfers have finished if some but not all of the RDMA requests
have been posted.
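To make the failure mode concrete, here is a rough and untested sketch
of what I mean. Note that wait_until_posted_wrs_done() is a made-up
placeholder for whatever wait mechanism we would end up with; it does
not exist in the verbs API:

#include <rdma/ib_verbs.h>

/*
 * Sketch only: post a chain of send WRs and handle the case where
 * posting fails partway through. On failure ib_post_send() sets
 * bad_wr to the first WR that was not accepted; every WR before it
 * has already been handed to the HCA and may still be executing.
 */
static int post_rdma_chain(struct ib_qp *qp, struct ib_send_wr *first_wr)
{
	struct ib_send_wr *bad_wr;
	int ret;

	ret = ib_post_send(qp, first_wr, &bad_wr);
	if (ret && bad_wr != first_wr) {
		/*
		 * Some but not all WRs were posted. Since only the
		 * last WR in the chain is signaled, polling the CQ for
		 * per-WR completions will not work; we would need
		 * e.g. a signaled zero-length WR or a QP drain to
		 * know when the posted transfers have finished.
		 */
		wait_until_posted_wrs_done(qp); /* placeholder */
	}
	return ret;
}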
Bart.