Message-ID: <20111222154506.GC1388@redhat.com>
Date:	Thu, 22 Dec 2011 10:45:06 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Rusty Russell <rusty@...tcorp.com.au>,
	Chris Wright <chrisw@...s-sol.org>,
	Jens Axboe <axboe@...nel.dk>,
	Stefan Hajnoczi <stefanha@...ux.vnet.ibm.com>,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH 0/6][RFC] virtio-blk: Change I/O path from request to BIO

On Thu, Dec 22, 2011 at 10:05:38AM +0900, Minchan Kim wrote:

[..]
> > Maybe using deadline or noop in the guest is better for benchmarking
> > against PCI-E based flash.
> 
> Good suggestion.
> I tested it with deadline on the guest side.
> 
> The result is not good.
> Although the gap is within the noise, Batch BIO's random performance
> regressed compared to CFQ.
> 
>             Request                 Batch BIO
> 
>          (MB/s)   stddev          (MB/s)   stddev
> w        787.030  31.494   w      748.714  68.490
> rw       216.044  29.734   rw     216.977  40.635
> r        771.765   3.327   r      771.107   4.299
> rr       280.096  25.135   rr     258.067  43.916
> 
> I did a small test of only Batch BIO with deadline and CFQ
> to see the I/O scheduler's effect.
> The result is very strange: deadline: 149 MB/s, CFQ: 87 MB/s.
> Because the Batch BIO patch uses make_request_fn instead of request_fn,
> we should not be affected by the I/O scheduler. (I mean, we issue the
> I/O before the I/O scheduler handles it.)
> 
> What do you think about it?
> Am I missing something?

This is indeed very strange. In the case of bio-based drivers, changing
the I/O scheduler on the queue should not change anything.
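
To illustrate the mechanism, here is a minimal sketch (not from the
patch series) contrasting the two queue setups, assuming the ~Linux
3.2-era block layer API; the demo_* names are hypothetical:

#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

/* Request-based path: the elevator (noop/deadline/cfq) queues and
 * merges bios into requests, then dispatches them here. */
static void demo_request_fn(struct request_queue *q)
{
	struct request *req;

	while ((req = blk_fetch_request(q)) != NULL)
		__blk_end_request_all(req, 0);	/* pretend it completed */
}

/* Bio-based path: called directly from submit_bio(); the I/O
 * scheduler never sees these bios. */
static void demo_make_request(struct request_queue *q, struct bio *bio)
{
	bio_endio(bio, 0);			/* pretend it completed */
}

static int __init demo_init(void)
{
	struct request_queue *rq_q, *bio_q;

	/* Request-based: bios funnel through the attached elevator. */
	rq_q = blk_init_queue(demo_request_fn, &demo_lock);

	/* Bio-based: no elevator is attached to this queue at all, so
	 * switching schedulers in sysfs should not affect it. */
	bio_q = blk_alloc_queue(GFP_KERNEL);
	if (bio_q)
		blk_queue_make_request(bio_q, demo_make_request);

	/* Error handling and queue cleanup omitted for brevity. */
	return (rq_q && bio_q) ? 0 : -ENOMEM;
}
module_init(demo_init);
MODULE_LICENSE("GPL");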

Try running blktrace on the vda device and see if you notice something
odd.

Also, you seem to be reporting contradictory results for Batch BIO.

Initially you mention that random I/O regresses with deadline as
compared to CFQ (it dropped from 325.976 MB/s to 258.067 MB/s).

In this second test you report that CFQ performs badly compared to
deadline (deadline: 149 MB/s vs. CFQ: 87 MB/s).

Two contradictory results?

Thanks
Vivek
