Date:	Thu, 10 Jul 2014 15:48:10 +0200
From:	Jens Axboe <>
To:	Benjamin LaHaise <>
CC:	Christoph Hellwig <>,
	"Elliott, Robert (Server Storage)" <>,
	"" <>,
	James Bottomley <>,
	Bart Van Assche <>,
	"" <>,
	"" <>
Subject: Re: scsi-mq V2

On 2014-07-10 15:44, Benjamin LaHaise wrote:
> On Thu, Jul 10, 2014 at 03:39:57PM +0200, Jens Axboe wrote:
>> That's how fio always runs, it sets up the context with the exact queue
>> depth that it needs. Do we have a good enough understanding of other aio
>> use cases to say that this isn't the norm? I would expect it to be, it's
>> the way that the API would most obviously be used.
> The problem with this approach is that it works very poorly with the
> batching of references done by per-cpu reference counting, which is
> pretty much a requirement now that many-core systems are the norm.
> Allocating the bare minimum is not the right thing to do today. That
> said, the default limits on the number of requests probably need to be
> raised.

Sorry, that's a complete cop-out. Then handle this internally: allocate a 
bigger pool and cap the limit if you need to. Look at the API: you pass 
in the number of requests you will use. Do you expect anyone to double 
up, just in case? That will never happen.

But all of this is side-stepping the point that there's a real bug 
reported here. The above could potentially explain an "it's using X more 
CPU" or "it's Y slower" report. But this one is a softlockup: it never 
completes.

Jens Axboe
