Message-ID: <20090623072418.GE12483@p15145560.pureserver.info>
Date:	Tue, 23 Jun 2009 09:24:18 +0200
From:	Ralf Gross <Ralf-Lists@...fgross.de>
To:	linux-kernel@...r.kernel.org
Cc:	fengguang.wu@...el.com
Subject: Re: io-scheduler tuning for better read/write ratio

Jeff Moyer wrote:
> Ralf Gross <rg@...-Softwaretechnik.com> writes:
> 
> > Jeff Moyer wrote:
> >> Jeff Moyer <jmoyer@...hat.com> writes:
> >> 
> >> > Ralf Gross <rg@...-softwaretechnik.com> writes:
> >> >
> >> >> Casey Dahlin wrote:
> >> >>> On 06/16/2009 02:40 PM, Ralf Gross wrote:
> >> >>> > David Newall wrote:
> >> >>> >> Ralf Gross wrote:
> >> >>> >>> write throughput is much higher than the read throughput (40 MB/s
> >> >>> >>> read, 90 MB/s write).
> >> >>> > 
> >> >>> > Hm, but I get higher read throughput (160-200 MB/s) if I don't write
> >> >>> > to the device at the same time.
> >> >>> > 
> >> >>> > Ralf
> >> >>> 
> >> >>> How specifically are you testing? It could depend a lot on the
> >> >>> particular access patterns you're using to test.
> >> >>
> >> >> I did the basic tests with tiobench. The real test is a test backup
> >> >> (bacula) with two jobs that create two 30 GB spool files on that device.
> >> >> The jobs partially write to the device in parallel. Depending on which
> >> >> spool file reaches 30 GB first, one starts reading from that file and
> >> >> writing to tape, while the other is still spooling.
> >> >
> >> > We are missing a lot of details here.  I guess the first thing I'd try
> >> > would be bumping up the max_readahead_kb parameter, since I'm guessing
> >> > that your backup application isn't driving very deep queue depths.  If
> >> > that doesn't work, then please provide exact invocations of tiobench
> >> > that reproduce the problem or some blktrace output for your real test.
> >> 
> >> Any news, Ralf?
> >
> > Sorry for the delay. At the moment there are large backups running and
> > using the RAID device for spooling, so I can't do any tests.
> >
> > Re. read-ahead: I tested different settings from 8Kb to 65Kb, but this
> > didn't help.
> >
> > I'll do some more tests when the backups are done (3-4 more days).
> 
> The default is 128KB, I believe, so it's strange that you would test
> smaller values.  ;)  I would try something along the lines of 1 or 2 MB.

Err, yes, this should have been MB, not KB.


$ cat /sys/block/sdc/queue/read_ahead_kb
16384
$ cat /sys/block/sdd/queue/read_ahead_kb
16384
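
(For the next round I can drop them to the 1-2 MB you suggested, e.g. something
along the lines of

 # echo 2048 > /sys/block/sdc/queue/read_ahead_kb
 # echo 2048 > /sys/block/sdd/queue/read_ahead_kb

-- the 16384 above is simply what the devices are set to at the moment.)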

I also tried different values for max_sectors_kb and nr_requests, but the
trend that writes were much faster than reads while there was simultaneous
read and write load on the device didn't change.
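
(Those were also changed per device via sysfs, e.g. something like

 # echo 512 > /sys/block/sdc/queue/max_sectors_kb
 # echo 256 > /sys/block/sdc/queue/nr_requests

and the same for sdd -- the values here are just examples, not the exact ones
I tried.)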

Changing the deadline parameters writes_starved, write_expire, read_expire,
front_merges or fifo_batch didn't change this behavior either.
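
(The deadline knobs live under the iosched directory, e.g. something along
the lines of

 # echo 4 > /sys/block/sdc/queue/iosched/writes_starved
 # echo 250 > /sys/block/sdc/queue/iosched/read_expire

and likewise for write_expire, front_merges and fifo_batch -- again, the
values are only examples of what I varied.)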

Ralf
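
P.S. When the backups are done I'll also try to capture some of the blktrace
output you asked for, probably with something along the lines of

 # blktrace -d /dev/sdc -o spool-test

(output name just an example) and post the blkparse results.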
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
