Date:	Thu, 16 Jul 2009 20:03:37 +0400
From:	Vladislav Bolkhovitin <vst@...b.net>
To:	Ronald Moesbergen <intercommit@...il.com>
CC:	fengguang.wu@...el.com, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, kosaki.motohiro@...fujitsu.com,
	Alan.Brunelle@...com, linux-fsdevel@...r.kernel.org,
	jens.axboe@...cle.com, randy.dunlap@...cle.com,
	Bart Van Assche <bart.vanassche@...il.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev


Ronald Moesbergen, on 07/16/2009 06:54 PM wrote:
> 2009/7/16 Vladislav Bolkhovitin <vst@...b.net>:
>> Ronald Moesbergen, on 07/16/2009 11:32 AM wrote:
>>> 2009/7/15 Vladislav Bolkhovitin <vst@...b.net>:
>>>>> The drop with 64 max_sectors_kb on the client is a consequence of
>>>>> how CFQ is working. I can't find the exact code responsible for
>>>>> this, but from all signs, CFQ stops delaying requests if the number
>>>>> of outstanding requests exceeds some threshold, which is 2 or 3.
>>>>> With 64 max_sectors_kb and 5 SCST I/O threads this threshold is
>>>>> exceeded, so CFQ doesn't recover the order of requests, hence the
>>>>> performance drop. With the default 512 max_sectors_kb and 128K RA
>>>>> the server sees at most 2 requests at a time.
>>>>>
>>>>> Ronald, can you perform the same tests with 1 and 2 SCST I/O
>>>>> threads, please?
>>> Ok. Should I still use the file-on-xfs testcase for this, or should I
>>> go back to using a regular block device?
>> Yes, please
> 
> As in: Yes, go back to block device, or Yes use file-on-xfs?

File-on-xfs :)
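
(For reference, the initiator-side knobs being varied in these runs,
max_sectors_kb and read-ahead, can be set from a tiny helper like the
sketch below. /dev/sdb and the values are only examples, adjust them
for your setup and run it as root. The SCST I/O thread count is a
target-side setting and isn't shown here.)

#include <stdio.h>

/* Sketch only: write a value into a sysfs block queue attribute.
 * The device name (sdb) and the values below are examples, not
 * anything from Ronald's setup. */
static int set_attr(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", val);
        fclose(f);
        return 0;
}

int main(void)
{
        /* 64 KB max request size on the initiator, 128 KB read-ahead */
        set_attr("/sys/block/sdb/queue/max_sectors_kb", "64");
        set_attr("/sys/block/sdb/queue/read_ahead_kb", "128");
        return 0;
}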

>>> The file-over-iscsi case is quite uncommon, I suppose; most people
>>> will export a block device over iSCSI, not a file.
>> No, files are common. The main reason why people use direct block
>> devices is the unsupported belief that, compared with files, they
>> "have less overhead" and so "should be faster". But that isn't true
>> and can easily be checked.
> 
> Well, there are other advantages to using a block device: they are
> generally more manageable; for instance, you can use LVM for resizing
> instead of strange dd magic to extend a file. When using a file you
> have to extend the volume that holds the file first, and then the file
> itself.

Files also have advantages. For instance, it's easier to back them up 
and move them between servers. On modern systems with fallocate() 
syscall support you don't need any "strange dd magic" to resize files; 
you can make them bigger nearly instantaneously. Also, with fairly 
simple modifications, scst_vdisk could be extended to make a single 
virtual device from several files.
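
To illustrate the fallocate() point, here is a minimal sketch (error
handling trimmed, the path and size are made-up examples) of growing a
backing file without rewriting any data:

#define _GNU_SOURCE             /* for fallocate() */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/vdisks/disk0.img";   /* example path only */
        off_t new_size = (off_t)20 * 1024 * 1024 * 1024;  /* grow to 20 GiB */
        int fd = open(path, O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Mode 0 allocates the new blocks and extends the file size in
         * one call; on XFS this is near-instant, no data is rewritten. */
        if (fallocate(fd, 0, 0, new_size) < 0) {
                perror("fallocate");
                return 1;
        }

        close(fd);
        return 0;
}

On filesystems without native fallocate() support, posix_fallocate()
can be used as a fallback, though it may emulate the call by writing
zeroes and lose the near-instant property.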

> And you don't lose disk space to filesystem metadata twice.

This is negligible (0.05% for XFS).

> Also, I still don't get why reads/writes from a block device are
> different in speed from reads/writes from a file on a filesystem.

Me too, and I'd appreciate it if someone could explain it. But I don't 
want to introduce one more variable into the task we are solving (how 
to get 100+ MB/s from iSCSI on your system).

> I for one will not be using files exported over iSCSI, but block
> devices (LVM volumes).

Are you sure?

> Ronald.