Message-ID: <20160614143132.GA17800@infradead.org>
Date: Tue, 14 Jun 2016 07:31:32 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Steve Wise <swise@...ngridcomputing.com>
Cc: 'Sagi Grimberg' <sagi@...htbits.io>,
'Christoph Hellwig' <hch@....de>, axboe@...nel.dk,
keith.busch@...el.com, 'Ming Lin' <ming.l@....samsung.com>,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
'Jay Freyensee' <james.p.freyensee@...el.com>,
'Armen Baloyan' <armenx.baloyan@...el.com>
Subject: Re: [PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver
On Thu, Jun 09, 2016 at 06:03:51PM -0500, Steve Wise wrote:
> The above nvmet cm event handler, nvmet_rdma_cm_handler(), calls
> nvmet_rdma_queue_connect() for CONNECT_REQUEST events, which calls
> nvmet_rdma_alloc_queue(), which, if it encounters a failure (like
> creating the qp), calls nvmet_rdma_cm_reject(), which calls
> rdma_reject(). The non-zero error, however, gets returned back here,
> and this function returns the error to the RDMA_CM, which will also
> reject the connection as well as destroy the cm_id. So I think two
> rejects are happening. Either nvmet should reject and destroy the
> cm_id, or it should do neither and return non-zero to the RDMA_CM to
> reject/destroy.
Can you just send a patch?