Message-ID: <564C41B8.2080904@dev.mellanox.co.il>
Date:	Wed, 18 Nov 2015 11:15:36 +0200
From:	Sagi Grimberg <sagig@....mellanox.co.il>
To:	Bart Van Assche <bart.vanassche@...disk.com>,
	Christoph Hellwig <hch@....de>, linux-rdma@...r.kernel.org
Cc:	axboe@...com, linux-scsi@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/9] srpt: chain RDMA READ/WRITE requests



On 18/11/2015 03:17, Bart Van Assche wrote:
> On 11/13/2015 05:46 AM, Christoph Hellwig wrote:
>> -        ret = ib_post_send(ch->qp, &wr.wr, &bad_wr);
>> -        if (ret)
>> -            break;
>> +        if (i == n_rdma - 1) {
>> +            /* only get completion event for the last rdma read */
>> +            if (dir == DMA_TO_DEVICE)
>> +                wr->wr.send_flags = IB_SEND_SIGNALED;
>> +            wr->wr.next = NULL;
>> +        } else {
>> +            wr->wr.next = &ioctx->rdma_ius[i + 1].wr;
>> +        }
>>       }
>>
>> +    ret = ib_post_send(ch->qp, &ioctx->rdma_ius->wr, &bad_wr);
>>       if (ret)
>>           pr_err("%s[%d]: ib_post_send() returned %d for %d/%d\n",
>>                    __func__, __LINE__, ret, i, n_rdma);
>
> Hello Christoph,

Hi Bart,

>
> Chaining RDMA requests is a great idea. But it seems to me that this
> patch is based on the assumption that posting multiple RDMA requests
> either succeeds as a whole or fails as a whole. Sorry but I'm not sure
> that the verbs API guarantees this. In the ib_srpt driver a QP can be
> changed at any time into the error state and there might be drivers that
> report an immediate failure in that case.

I'm not so sure it actually matters if some WRs succeeded. In the normal
flow, when srpt has enough available work requests (sq_wr_avail), they
should all succeed; otherwise we're in trouble. If the QP transitioned
to the ERROR state, then some WRs failed, but those that succeeded will
generate flush completions, and srpt should handle those correctly,
shouldn't it?

> I think even when chaining
> RDMA requests that we still need a mechanism to wait until ongoing RDMA
> transfers have finished if some but not all RDMA requests have been posted.

I'm not an expert on srpt, can you explain how this mechanism will help?