Message-ID: <20131028085206.GB31270@infradead.org>
Date:	Mon, 28 Oct 2013 01:52:06 -0700
From:	Christoph Hellwig <hch@...radead.org>
To:	Rusty Russell <rusty@...tcorp.com.au>
Cc:	Jens Axboe <axboe@...nel.dk>, Asias He <asias@...hat.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] virtio_blk: blk-mq support

On Mon, Oct 28, 2013 at 01:17:54PM +1030, Rusty Russell wrote:
> Let's pretend I'm stupid.
> 
> We don't actually have multiple queues through to the host, but we're
> pretending to, because it makes the block layer go faster?
> 
> Do I want to know *why* it's faster?  Or should I look the other way?

You shouldn't.  As for how multiple queues benefit here, I'd defer to
Jens; given the single workqueue I don't really know where to look.

The real benefit that unfortunately wasn't obvious from the description
is that even with just a single queue the blk-multiqueue infrastructure
will be a lot faster, because it is designed in a much more streamlined
fashion and avoids lots of lock roundtrips, both during submission itself
and between submission and completion.  Back when I tried to get virtio-blk
to perform well on high-end flash (the work that Asias took over later),
queue_lock contention was the major issue in virtio-blk, and this
patch gets rid of that even with a single queue.

A good example is the set of patches from Nick that move scsi drivers
supporting only a single queue over to the new infrastructure.  Even
that gave over a 10-fold improvement over the old code.

Unfortunately I do not have access to this kind of hardware at the
moment, but I'd love to see if Asias or anyone at Red Hat could redo
those old numbers.