Message-ID: <53BE999A.6060309@kernel.dk>
Date: Thu, 10 Jul 2014 15:48:10 +0200
From: Jens Axboe <axboe@...nel.dk>
To: Benjamin LaHaise <bcrl@...ck.org>
CC: Christoph Hellwig <hch@...radead.org>,
"Elliott, Robert (Server Storage)" <Elliott@...com>,
"dgilbert@...erlog.com" <dgilbert@...erlog.com>,
James Bottomley <James.Bottomley@...senPartnership.com>,
Bart Van Assche <bvanassche@...ionio.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: scsi-mq V2
On 2014-07-10 15:44, Benjamin LaHaise wrote:
> On Thu, Jul 10, 2014 at 03:39:57PM +0200, Jens Axboe wrote:
>> That's how fio always runs; it sets up the context with the exact queue
>> depth that it needs. Do we have a good enough understanding of other aio
>> use cases to say that this isn't the norm? I would expect it to be; it's
>> the way the API would most obviously be used.
>
> The problem with this approach is that it works very poorly with per-CPU
> reference counting's batching of references, which is pretty much a
> requirement now that many-core systems are the norm. Allocating the bare
> minimum is not the right thing to do today. That said, the default limits
> on the number of requests probably need to be raised.
Sorry, that's a complete cop-out. Then handle this internally: allocate a
bigger pool and cap the limit if you need to. Look at the API. You pass in
the number of requests you will use. Do you expect anyone to double up,
just in case? That will never happen.
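
To spell out the usage pattern being discussed, here is a minimal sketch
(my own illustration; QUEUE_DEPTH and the raw syscall wrappers are
assumptions, not lifted from fio) of an io_setup() caller sizing the
context to exactly the depth it intends to drive:

#include <linux/aio_abi.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

#define QUEUE_DEPTH 32	/* hypothetical: the exact number of in-flight requests */

int main(void)
{
	aio_context_t ctx = 0;

	/* Ask the kernel for exactly QUEUE_DEPTH events, nothing more. */
	if (syscall(SYS_io_setup, QUEUE_DEPTH, &ctx) < 0) {
		perror("io_setup");
		return 1;
	}

	/* ... io_submit()/io_getevents() with up to QUEUE_DEPTH in flight ... */

	syscall(SYS_io_destroy, ctx);
	return 0;
}

Nobody coding to that interface is going to pass in 2 * QUEUE_DEPTH just
to keep the kernel's internal batching happy.
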
But all of this is sidestepping the point that there's a real bug
reported here. The per-CPU batching overhead could potentially explain
an "it's using X more CPU" or "it's Y slower" report. What was reported
is a soft lockup: it never completes.
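
To put a number on the batching interaction (a back-of-the-envelope
sketch with made-up values, my own illustration rather than the actual
fs/aio.c code), this is the kind of arithmetic that makes an
exactly-sized pool look exhausted on a many-core box:

#include <stdio.h>

int main(void)
{
	unsigned int queue_depth = 32;	/* what the caller asked io_setup() for */
	unsigned int nr_cpus     = 64;	/* hypothetical many-core machine       */
	unsigned int batch       = 4;	/* hypothetical per-CPU batch size      */

	/* Worst case: every other CPU is sitting on a full, unused batch. */
	unsigned int stranded = (nr_cpus - 1) * batch;

	printf("%u of %u request slots can be parked in other CPUs' batches\n",
	       stranded < queue_depth ? stranded : queue_depth, queue_depth);
	return 0;
}

That can cost CPU time and throughput; it does not explain a submission
that never completes.
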
--
Jens Axboe