Message-ID: <527029C9.2030102@kernel.dk>
Date:	Tue, 29 Oct 2013 15:34:01 -0600
From:	Jens Axboe <axboe@...nel.dk>
To:	Christoph Hellwig <hch@...radead.org>,
	Rusty Russell <rusty@...tcorp.com.au>
CC:	Asias He <asias@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] virtio_blk: blk-mq support

On 10/28/2013 02:52 AM, Christoph Hellwig wrote:
> On Mon, Oct 28, 2013 at 01:17:54PM +1030, Rusty Russell wrote:
>> Let's pretend I'm stupid.
>>
>> We don't actually have multiple queues through to the host, but we're
>> pretending to, because it makes the block layer go faster?
>>
>> Do I want to know *why* it's faster?  Or should I look the other way?
> 
> You shouldn't.  As for how multiple queues benefit here, I'd like to defer
> to Jens; given the single workqueue, I don't really know where to look.

The 4 was chosen to "have some number of multiple queues" and to be able
to exercise that part; no real performance testing was done by me after
the implementation to verify whether 1, 2, 4, or some other count was
fastest. But it was useful for that! For merging, we can easily just make
it 1, since that's the most logical transformation. I can set some time
aside to play with multiple queues and see if we gain anything, but that
can be done post merge.
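
For context, the knob in question is the nr_hw_queues field of the
driver's blk-mq registration. A sketch of what the single-queue merge
default could look like, assuming the blk_mq_reg registration interface
from the patch series under discussion (values illustrative, not the
exact patch):

```c
/* Sketch only: assumes the blk_mq_reg interface from the blk-mq
 * series under discussion; field values are illustrative. */
static struct blk_mq_reg virtio_mq_reg = {
	.ops		= &virtio_mq_ops,
	.nr_hw_queues	= 1,	/* the "4" above, dropped to 1 for merging */
	.queue_depth	= 64,
	.numa_node	= NUMA_NO_NODE,
	.flags		= BLK_MQ_F_SHOULD_MERGE,
};

/* At probe time the driver would then do something like: */
vblk->disk->queue = blk_mq_init_queue(&virtio_mq_reg, vblk);
```

Bumping nr_hw_queues later is then a one-line change once the
multi-queue numbers have been gathered.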

> The real benefit that unfortunately wasn't obvious from the description
> is that even with just a single queue the blk-multiqueue infrastructure
> will be a lot faster, because it is designed in a much more streamlined
> fashion and avoids lots of lock roundtrips, both during submission itself
> and between submission and completion.  Back when I tried to get virtio-blk
> to perform well on high-end flash (the work that Asias took over later),
> the queue_lock contention was the major issue in virtio-blk, and this
> patch gets rid of that even with a single queue.
> 
> A good example is the patches from Nick that move SCSI drivers which
> only support a single queue over to the infrastructure.  Even that gave
> over a 10-fold improvement over the old code.
> 
> Unfortunately I do not have access to this kind of hardware at the
> moment, but I'd love to see if Asias or anyone at Red Hat could redo
> those old numbers.

I've got a variety of fast devices, so I should be able to run that.
Asias, let me know where you stand; it'd be great to have independent
results.

-- 
Jens Axboe

