Message-ID: <4F01615E.1000301@redhat.com>
Date:	Mon, 02 Jan 2012 09:48:46 +0200
From:	Dor Laor <dlaor@...hat.com>
To:	Stefan Hajnoczi <stefanha@...il.com>
CC:	Minchan Kim <minchan@...nel.org>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Chris Wright <chrisw@...s-sol.org>,
	Jens Axboe <axboe@...nel.dk>,
	Stefan Hajnoczi <stefanha@...ux.vnet.ibm.com>,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@...radead.org>,
	Vivek Goyal <vgoyal@...hat.com>
Subject: Re: [PATCH 0/6][RFC] virtio-blk: Change I/O path from request to
 BIO

On 01/01/2012 06:45 PM, Stefan Hajnoczi wrote:
> On Thu, Dec 22, 2011 at 11:41 PM, Minchan Kim<minchan@...nel.org>  wrote:
>> On Thu, Dec 22, 2011 at 12:57:40PM +0000, Stefan Hajnoczi wrote:
>>> On Wed, Dec 21, 2011 at 1:00 AM, Minchan Kim<minchan@...nel.org>  wrote:
>>> If you're stumped by the performance, perhaps compare blktraces of the
>>> request approach vs the bio approach.  We're probably performing I/O
>>> more CPU-efficiently but the I/O pattern itself is worse.
>>
>> You mean the I/O scheduler has many techniques to produce a good I/O pattern?
>> That's what I want to discuss in this RFC.
>>
>> I guess the request layer has many techniques, proven over a long time,
>> for doing I/O well, but a BIO-based driver ignores them just to reduce
>> locking overhead. Of course, we can add such techniques to a BIO-based
>> driver, like the custom batching in this series. But that needs a lot
>> of work, is real duplication, and will be a problem to maintain.
>>
>> I would like to hear opinions on whether this direction is good or bad.
>
> This series is a good platform for performance analysis but not
> something that should be merged, IMO.  As you said, it duplicates work
> that I/O schedulers and the request-based block layer do.  If other
> drivers start taking this approach too, the duplication will
> proliferate.
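
(For anyone following the thread, the request-based vs BIO-based split
comes down to which hook the driver registers with the block layer.
A rough sketch against the ~3.2-era API; the vblk_* names are
placeholders, not code from this series:)

  #include <linux/blkdev.h>

  static DEFINE_SPINLOCK(vblk_lock);

  /* Placeholder callbacks, not taken from the actual patches. */
  static void vblk_request_fn(struct request_queue *q)
  {
          /* ... drain queued requests, e.g. via blk_fetch_request() ... */
  }

  static void vblk_make_request(struct request_queue *q, struct bio *bio)
  {
          /* ... hand the bio to the host and complete it later ... */
  }

  /* Request-based setup: the block layer queues, merges and schedules
   * requests through the elevator, then calls the driver's request_fn. */
  static struct request_queue *vblk_setup_request_based(void)
  {
          return blk_init_queue(vblk_request_fn, &vblk_lock);
  }

  /* BIO-based setup: every bio is handed straight to the driver's
   * make_request_fn, bypassing the I/O scheduler and most of the
   * request-layer machinery. */
  static struct request_queue *vblk_setup_bio_based(void)
  {
          struct request_queue *q = blk_alloc_queue(GFP_KERNEL);

          if (q)
                  blk_queue_make_request(q, vblk_make_request);
          return q;
  }
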
>
> The value of this series is that you have a prototype to benchmark and
> understand the bottlenecks in virtio-blk and the block layer better.
> The results do not show that bypassing the I/O scheduler is always a
> win.  The fact that you added batching suggests there is some benefit
> to what the request-based code path does.  So find out what's good
> about the request-based code path and how to get the best of both
> worlds.
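
(The request path's own batching, for comparison, is the per-task plug:
bios submitted between blk_start_plug() and blk_finish_plug() sit in a
plug list so the block layer can merge them before dispatch.  A sketch
against the 3.x API; submit_batched() is just an illustrative helper:)

  #include <linux/blkdev.h>

  static void submit_batched(struct bio **bios, int nr)
  {
          struct blk_plug plug;
          int i;

          /* Hold the bios in the per-task plug list so the block layer
           * can merge adjacent ones before they reach the driver. */
          blk_start_plug(&plug);
          for (i = 0; i < nr; i++)
                  submit_bio(bios[i]->bi_rw, bios[i]);
          blk_finish_plug(&plug);
  }
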
>
> By the way, drivers for solid-state devices can set QUEUE_FLAG_NONROT
> to hint that seek time optimizations may be sub-optimal.  NBD and
> other virtual/pseudo device drivers set this flag.  Should virtio-blk
> set it and how does it affect performance?

Seems logical to me. If the host's underlying backing storage is an SSD
or a fast remote SAN, we need such a flag. Even in the case of standard
local storage, the host will still do the seek-time optimizations, so
there is no need to do them twice.
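
For concreteness, setting the hint from the driver is a one-liner; a
sketch, assuming the ~3.x queue-flag helper (newer trees spell this
differently), not the actual virtio_blk code:

  #include <linux/blkdev.h>

  /* Tell the guest block layer the device has no seek penalty, so the
   * I/O scheduler can skip rotational heuristics.  Assumed helper name
   * for the ~3.x era. */
  static void virtblk_mark_nonrot(struct request_queue *q)
  {
          queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
  }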

>
> Stefan

