Message-ID: <56538AFD.9080103@sandisk.com>
Date: Mon, 23 Nov 2015 13:54:05 -0800
From: Bart Van Assche <bart.vanassche@...disk.com>
To: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
CC: Christoph Hellwig <hch@....de>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"sagig@....mellanox.co.il" <sagig@....mellanox.co.il>,
"axboe@...com" <axboe@...com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/9] IB: add a proper completion queue abstraction
On 11/23/2015 01:28 PM, Jason Gunthorpe wrote:
> On Mon, Nov 23, 2015 at 01:04:25PM -0800, Bart Van Assche wrote:
>
>> Quite some time ago the send queue in the SRP initiator driver was
>> changed from signaled to non-signaled to reduce the number of
>> interrupts triggered by that driver. The SRP initiator polls the
>> send queue every time before a SCSI command is sent to the target. I
>> think this is a pattern that is also useful for other ULPs, so I'm
>> not convinced that ib_process_cq_direct() should be deprecated :-)
>
> As I explained, that is a fine idea, but I can't see how SRP is able
> to correctly do sendq flow control without spinning on the poll, which
> it does not do.
>
> I'm guessing SRP is trying to drive sendq flow control from the recv
> side, like NFS was. This is wrong and should not be part of the common
> API.
>
> Does that make sense?
Not really ... Please have a look at the SRP initiator source code. What
the SRP initiator does is poll the send queue before sending a new
SCSI command to the target system. I think this approach could also be
used in other ULP drivers, provided the send queue is polled often
enough that no send queue overflow occurs.
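To make that concrete, here is a minimal sketch of the pattern (not the
actual SRP code; the channel structure, the free_tx list and the
get_tx_iu() name are illustrative, and it assumes the
ib_process_cq_direct() helper added by this series):

	/* Sketch: reap send completions before posting a new command. */
	static struct srp_iu *get_tx_iu(struct srp_channel *ch)
	{
		struct srp_iu *iu;

		/* Poll the send CQ directly, no interrupt needed;
		 * completed iu's are returned to ch->free_tx by the
		 * send completion handler. */
		ib_process_cq_direct(ch->send_cq, -1);

		if (list_empty(&ch->free_tx))
			return NULL;	/* send queue full, caller retries */

		iu = list_first_entry(&ch->free_tx, struct srp_iu, list);
		list_del(&iu->list);
		return iu;
	}

Since this runs before every send, the poll frequency tracks the send
rate, which is what keeps the send queue from overflowing.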
Bart.