Message-ID: <467E9239.5080702@msgid.tls.msk.ru>
Date:	Sun, 24 Jun 2007 19:48:09 +0400
From:	Michael Tokarev <mjt@....msk.ru>
To:	"Dr. David Alan Gilbert" <linux@...blig.org>
CC:	Jeff Garzik <jeff@...zik.org>, Carlo Wood <carlo@...noe.com>,
	Tejun Heo <htejun@...il.com>,
	Manoj Kasichainula <manoj@...com>,
	linux-kernel@...r.kernel.org,
	IDE/ATA development list <linux-ide@...r.kernel.org>
Subject: Re: SATA RAID5 speed drop of 100 MB/s

Dr. David Alan Gilbert wrote:
> * Michael Tokarev (mjt@....msk.ru) wrote:
> 
> <snip>
> 
>> By the way, I did some testing of various drives, and NCQ/TCQ indeed
>> makes some difference with multiple I/O processes (a "server"-like
>> workload) -- IF NCQ/TCQ is implemented properly, especially in the
>> drive.
>>
>> For example, this is a good one:
>>
>> Single Seagate 74GB SCSI drive (10K RPM); all figures in MB/s
>>
>> BlkSz Trd linRd rndRd linWr  rndWr  linR/W     rndR/W
>> 1024k   1  83.1  36.0  55.8  34.6  28.2/27.6  20.3/19.4
>>         2        45.2        44.1             36.4/ 9.9
>>         4        48.1        47.6             40.7/ 7.1
[]
>> The only thing I don't understand is why, with a larger I/O block
>> size, we see the write speed drop with multiple threads.
> 
> My guess is that something is chopping them up into smaller writes.

At least it's not in the kernel.  According to /proc/diskstats,
the requests go to the drive in 1024 KB units.
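
(For reference, the arithmetic behind that check -- a rough Python
sketch, not the tool actually used here.  "sda" is a placeholder for
the device under test; field positions follow Documentation/iostats.txt:

DEV = "sda"  # placeholder: device under test

# /proc/diskstats, per Documentation/iostats.txt:
# fields[3] = reads completed, fields[5] = sectors read,
# fields[7] = writes completed, fields[9] = sectors written.
with open("/proc/diskstats") as f:
    for line in f:
        fields = line.split()
        if fields[2] == DEV:
            reads, rsec = int(fields[3]), int(fields[5])
            writes, wsec = int(fields[7]), int(fields[9])
            if reads:
                print("avg read size:  %.0f KB" % (rsec * 512.0 / reads / 1024))
            if writes:
                print("avg write size: %.0f KB" % (wsec * 512.0 / writes / 1024))
            break

A 1024 KB average on the write side confirms nothing below the block
layer is splitting the requests.)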

>> And in contrast to the above, here's another test run, now
>> with a Seagate SATA ST3250620AS ("desktop" class) 250GB
>> 7200RPM drive:
>>
>> BlkSz Trd linRd rndRd linWr rndWr   linR/W    rndR/W
>> 1024k   1  78.4  34.1  33.5  24.6  19.6/19.5  16.0/12.7
>>         2        33.3        24.6             15.4/13.8
>>         4        34.3        25.0             14.7/15.0
> 
>> And second, so far I haven't seen a case where a drive
>> with NCQ/TCQ enabled works worse than without.  I don't
>> want to say there aren't such drives/controllers, but it
>> just happens that I haven't seen any.
> 
> Yes you have - the random writes with large blocks and 2 or 4 threads
> are significantly better for your non-NCQ drive, and the gap grows as
> you add more threads - I'm curious what happens with 8 threads or more.

Both drives shown above were tested with [NT]CQ enabled.  And the first
drive (the 74GB SCSI one, where speed increases with the number of
threads) is the one with the "better" TCQ implementation.  When I turn
off TCQ for that drive, there's almost no speed increase as the number
of threads grows.
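
(For anyone who wants to repeat that comparison: one way to disable
command queueing on a SCSI/SATA disk is to drop its queue depth to 1
through sysfs.  A minimal sketch, assuming a controller that honours
the write -- "sda" is a placeholder, and it needs root:

DEV = "sda"  # placeholder: device under test
path = "/sys/block/%s/device/queue_depth" % DEV

with open(path) as f:
    print("queue depth was: " + f.read().strip())

# Depth 1 means at most one outstanding command, i.e. no TCQ/NCQ.
with open(path, "w") as f:
    f.write("1\n")

Re-running the same workload before and after makes the effect of the
queueing visible directly.)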

(I can't test this drive now as it's in production.  The results were
gathered before I installed the system on it.)
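
In case someone wants to reproduce the numbers: the general shape of
the threaded random-read test is sketched below.  Hypothetical Python 3,
not the tool actually used; a real run should also pass os.O_DIRECT
(with aligned buffers), or the page cache will inflate the results.

import os, random, threading, time

DEV     = "/dev/sda"     # placeholder: device (or large file) to test
BLKSZ   = 1024 * 1024    # 1024k blocks, as in the tables above
THREADS = 4
SECONDS = 10

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
os.close(fd)

counts = []

def worker():
    # Each thread hammers the device with block-aligned random reads
    # for a fixed wall-clock interval and records how many it finished.
    fd = os.open(DEV, os.O_RDONLY)
    deadline = time.time() + SECONDS
    n = 0
    while time.time() < deadline:
        off = random.randrange(size // BLKSZ) * BLKSZ
        os.pread(fd, BLKSZ, off)
        n += 1
    os.close(fd)
    counts.append(n)

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("aggregate random read: %.1f MB/s"
      % (sum(counts) * BLKSZ / 1048576.0 / SECONDS))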

/mjt
