Date:	Fri, 10 Jul 2009 08:32:26 +0200
From:	Ronald Moesbergen <intercommit@...il.com>
To:	Vladislav Bolkhovitin <vst@...b.net>
Cc:	fengguang.wu@...el.com, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, kosaki.motohiro@...fujitsu.com,
	Alan.Brunelle@...com, hifumi.hisashi@....ntt.co.jp,
	linux-fsdevel@...r.kernel.org, jens.axboe@...cle.com,
	randy.dunlap@...cle.com, Bart Van Assche <bart.vanassche@...il.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev

2009/7/8 Vladislav Bolkhovitin <vst@...b.net>:
> Ronald Moesbergen, on 07/08/2009 12:49 PM wrote:
>>
>> 2009/7/7 Vladislav Bolkhovitin <vst@...b.net>:
>>>
>>> Ronald Moesbergen, on 07/07/2009 10:49 AM wrote:
>>>>>>>
>>>>>>> Most likely there was some confusion between the tested and patched
>>>>>>> versions of the kernel, or you forgot to apply the io_context patch.
>>>>>>> Please recheck.
>>>>>>
>>>>>> The tests above were definitely done right. I just rechecked the
>>>>>> patches, and I do see an average increase of about 10 MB/s over an
>>>>>> unpatched kernel. But overall the performance is still pretty bad.
>>>>>
>>>>> Did you rebuild and reinstall SCST after patching the kernel?
>>>>
>>>> Yes, I have. And the warning about missing io_context patches wasn't
>>>> there during compilation.
>>>
>>> Can you update to the latest trunk/, then send me the kernel logs
>>> covering boot through a single dd run with any block size >128K,
>>> along with the transfer rate dd reported, please?
>>>
>>
>> I think I just reproduced the 'wrong' result:
>>
>> dd if=/dev/sdc of=/dev/null bs=512K count=2000
>> 2000+0 records in
>> 2000+0 records out
>> 1048576000 bytes (1.0 GB) copied, 12.1291 s, 86.5 MB/s
>>
>> This happens when I run 'dd' on the device while a filesystem is
>> mounted on it. The mounted filesystem causes some of the device's
>> blocks to be cached, and therefore the results are wrong. This was not
>> the case in any of the blockdev-perftest runs I did (the filesystem
>> was never mounted).
>
> Why do you think the file system (which one, BTW?) has any additional
> caching if you did "echo 3 > /proc/sys/vm/drop_caches" before the tests? All
> block devices and file systems use the same cache facilities.

I didn't drop the caches because I had just restarted both machines and
thought that would be enough. But because of the mounted filesystem the
results were invalid. (The filesystem is OCFS2, but that shouldn't
matter.)
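
For future runs I'll explicitly drop the caches on both machines before
each measurement; roughly this (a minimal sketch, the device name is
just an example):

  sync                                # flush dirty pages so they can be freed
  echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes
  dd if=/dev/sdc of=/dev/null bs=512K count=2000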

> I also noticed long ago that reading data from a block device is slower
> than reading from files on the file system mounted on that device. Can
> anybody explain it?
>
> Looks like this is strangeness #2 that we have uncovered in our tests
> (the first one, earlier in this thread, was why context RA doesn't work
> as well as it should with cooperative I/O threads).
>
> Can you rerun the same 11 tests over a file on the file system, please?

I'll see what I can do. Just to be sure: you want me to run
blockdev-perftest on a file on the OCFS2 filesystem that is mounted
on the client over iSCSI, right?
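
If it helps, I was planning something along these lines (the mount
point and file name are just examples, not the exact blockdev-perftest
invocation):

  # create a test file on the mounted OCFS2 volume
  dd if=/dev/zero of=/mnt/ocfs2/testfile bs=512K count=2000
  sync
  echo 3 > /proc/sys/vm/drop_caches   # make sure nothing is cached
  # read it back to measure throughput
  dd if=/mnt/ocfs2/testfile of=/dev/null bs=512K count=2000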

Ronald.
