Date:	Fri, 13 Feb 2009 23:08:25 +0300
From:	Vladislav Bolkhovitin <vst@...b.net>
To:	Wu Fengguang <wfg@...ux.intel.com>
CC:	Jens Axboe <jens.axboe@...cle.com>, Jeff Moyer <jmoyer@...hat.com>,
	"Vitaly V. Bursov" <vitalyb@...enet.dn.ua>,
	linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases

Wu Fengguang, on 02/13/2009 04:57 AM wrote:
> On Thu, Feb 12, 2009 at 09:35:18PM +0300, Vladislav Bolkhovitin wrote:
>> Sorry for such a long delay. There were many other activities I had
>> to handle first, and I had to be sure I didn't miss anything.
>>
>> We didn't use NFS; we used SCST (http://scst.sourceforge.net) with
>> the iSCSI-SCST target driver. Its architecture is similar to NFS: N
>> threads (N=5 in this case) handle IO arriving over the wire from
>> remote initiators (clients) via the iSCSI protocol. In addition, SCST
>> has a patch called export_alloc_io_context (see
>> http://lkml.org/lkml/2008/12/10/282), which lets the IO threads queue
>> IO using a single IO context, so we can check whether context RA can
>> replace grouping the IO threads into a single IO context.
>>
>> Unfortunately, the results are negative. We found neither any
>> advantage of context RA over the current RA implementation, nor any
>> indication that context RA can replace grouping the IO threads into a
>> single IO context.
>>
>> The setup on the target (server) was the following: 2 SATA drives
>> grouped into an md RAID-0 with an average local read throughput of
>> ~120MB/s ("dd if=/dev/zero of=/dev/md0 bs=1M count=20000" outputs
>> "20971520000 bytes (21 GB) copied, 177,742 s, 118 MB/s"). The md
>> device was split into 3 partitions: the first partition was 10% of
>> the space at the beginning of the device, the last partition was 10%
>> of the space at the end of the device, and the middle one was the
>> rest of the space between them. The first and the last partitions
>> were then exported to the initiator (client), where they appeared as
>> /dev/sdb and /dev/sdc respectively.
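
Roughly, that layout could be recreated with something like the sketch
below (the disk names, the choice of parted, and the read-direction dd
are assumptions, not the commands that were actually used):

        # assumption: the two SATA drives are /dev/sdX and /dev/sdY
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY

        # first 10%, middle 80%, last 10% of the md device
        parted -s /dev/md0 mklabel gpt
        parted -s /dev/md0 mkpart first  0% 10%
        parted -s /dev/md0 mkpart middle 10% 90%
        parted -s /dev/md0 mkpart last   90% 100%

        # local sequential read throughput of the whole array
        dd if=/dev/md0 of=/dev/null bs=1M count=20000
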
> 
> Vladislav, Thank you for the benchmarks! I'm very interested in
> optimizing your workload and figuring out what happens underneath.
> 
> Are the client and server two standalone boxes connected by GBE?

Yes, they are directly connected using GbE.

> When you set readahead sizes in the benchmarks, you are setting them
> in the server side? I.e. "linux-4dtq" is the SCST server?

Yes, it's the server. On the client, all the parameters were left at
their defaults.

> What's the
> client side readahead size?

The default, i.e. 128KB.
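
For reference, the client-side readahead can be checked or changed as in
the sketch below (/dev/sdb is taken from the setup above):

        # readahead window in 512-byte sectors; 256 sectors == 128KB
        blockdev --getra /dev/sdb
        # the same value in KB via sysfs
        cat /sys/block/sdb/queue/read_ahead_kb
        # example: raise it to 2MB (4096 sectors)
        blockdev --setra 4096 /dev/sdb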

> It would help a lot to debug readahead if you can provide the
> server side readahead stats and trace log for the worst case.
> This will automatically answer the above questions as well as disclose
> the micro-behavior of readahead:
> 
>         mount -t debugfs none /sys/kernel/debug
> 
>         echo > /sys/kernel/debug/readahead/stats # reset counters
>         # do benchmark
>         cat /sys/kernel/debug/readahead/stats
> 
>         echo 1 > /sys/kernel/debug/readahead/trace_enable
>         # do micro-benchmark, i.e. run the same benchmark for a short time
>         echo 0 > /sys/kernel/debug/readahead/trace_enable
>         dmesg
> 
> The above readahead trace should help find out how the client side
> sequential reads convert into server side random reads, and how we can
> prevent that.

We will do it as soon as we have a free window on that system.
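
A small sketch of how those steps could be wrapped up on the server so
the output is captured in one go (the benchmark invocations are
placeholders; the debugfs paths are exactly those quoted above):

        #!/bin/sh
        mount -t debugfs none /sys/kernel/debug 2>/dev/null

        echo > /sys/kernel/debug/readahead/stats   # reset counters
        # ... run the full benchmark here (placeholder) ...
        cat /sys/kernel/debug/readahead/stats > ra-stats.txt

        echo 1 > /sys/kernel/debug/readahead/trace_enable
        # ... run a short benchmark here (placeholder) ...
        echo 0 > /sys/kernel/debug/readahead/trace_enable
        dmesg > ra-trace.txt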

Thanks,
Vlad
