Message-ID: <x49hby8jbrd.fsf@segfault.boston.devel.redhat.com>
Date:	Mon, 22 Jun 2009 15:42:46 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Ralf Gross <rg@...-Softwaretechnik.com>
Cc:	linux-kernel@...r.kernel.org, fengguang.wu@...el.com
Subject: Re: io-scheduler tuning for better read/write ratio

Ralf Gross <rg@...-Softwaretechnik.com> writes:

>> Jeff Moyer wrote:
>> Jeff Moyer <jmoyer@...hat.com> writes:
>> 
>> > Ralf Gross <rg@...-softwaretechnik.com> writes:
>> >
>> >>> Casey Dahlin wrote:
>> >>> On 06/16/2009 02:40 PM, Ralf Gross wrote:
>> >>> >> David Newall wrote:
>> >>> >> Ralf Gross wrote:
>> >>> >>> write throughput is much higher than the read throughput (40 MB/s
>> >>> >>> read, 90 MB/s write).
>> >>> > 
>> >>> > Hm, but I get higher read throughput (160-200 MB/s) if I don't write
>> >>> > to the device at the same time.
>> >>> > 
>> >>> > Ralf
>> >>> 
>> >>> How specifically are you testing? It could depend a lot on the
>> >>> particular access patterns you're using to test.
>> >>
>> >> I did the basic tests with tiobench. The real test is a test backup
>> >> (bacula) with 2 jobs that create two 30 GB spool files on that device.
>> >> The jobs partially write to the device in parallel. Depending on
>> >> which spool file reaches 30 GB first, one job starts reading from
>> >> that file and writing to tape, while the other is still spooling.
>> >
>> > We are missing a lot of details, here.  I guess the first thing I'd try
>> > would be bumping up the read_ahead_kb parameter, since I'm guessing
>> > that your backup application isn't driving very deep queue depths.  If
>> > that doesn't work, then please provide the exact tiobench invocations
>> > that reproduce the problem or some blktrace output for your real test.
>> 
>> Any news, Ralf?
>
> Sorry for the delay. At the moment there are large backups running that
> use the RAID device for spooling, so I can't do any tests.
>
> Re. readahead: I tested different settings from 8 KB to 65 KB; this
> didn't help.
>
> I'll do some more tests when the backups are done (3-4 more days).

The default is 128 KB, I believe, so it's strange that you would test
smaller values.  ;)  I would try something along the lines of 1 or 2 MB.
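
Concretely, something like this minimal sketch (the device name sdX and
the 2 MB figure are placeholders for whatever your spool device is, and
writing the sysfs file needs root):

    #!/usr/bin/env python3
    # Minimal sketch: raise per-device readahead via sysfs.
    # "sdX" and the 2 MB target are placeholders for this thread's setup.
    DEV = "sdX"
    TARGET_KB = 2048  # i.e. 2 MB; try 1 or 2 MB as suggested above

    path = f"/sys/block/{DEV}/queue/read_ahead_kb"

    with open(path) as f:
        print("old read_ahead_kb:", f.read().strip())

    with open(path, "w") as f:
        f.write(str(TARGET_KB))  # takes effect for subsequent reads

blockdev --setra sets the same value, but it counts in 512-byte sectors
rather than KB.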

I'm CCing Fengguang in case he has any suggestions.

Cheers,
Jeff

p.s. Fengguang, the thread starts here:
     http://lkml.org/lkml/2009/6/16/390
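
p.p.s. Ralf, if the readahead bump doesn't help, here is a minimal
sketch of the kind of blktrace capture I asked for earlier (/dev/sdX
and the 60-second window are illustrative; blktrace and blkparse need
to be installed, and this must run as root while the spooling jobs are
active):

    #!/usr/bin/env python3
    # Minimal sketch: trace the spool device under the mixed
    # read/write backup workload, then render the trace to text.
    import subprocess

    DEV = "/dev/sdX"  # placeholder for the actual spool device
    SECONDS = "60"    # illustrative capture window

    # Record block-layer events into files named spool.blktrace.<cpu>.
    subprocess.run(["blktrace", "-d", DEV, "-w", SECONDS, "-o", "spool"],
                   check=True)

    # Convert the binary per-CPU traces to human-readable text.
    with open("spool.txt", "w") as out:
        subprocess.run(["blkparse", "-i", "spool"], stdout=out,
                       check=True)

The resulting spool.txt is what would be useful to post here.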
