Message-ID: <x49iqdaz9kr.fsf@segfault.boston.devel.redhat.com>
Date:	Mon, 16 Nov 2009 12:03:00 -0500
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Jan Kara <jack@...e.cz>
Cc:	jens.axboe@...cle.com, LKML <linux-kernel@...r.kernel.org>,
	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Galbraith <efault@....de>, mszeredi@...e.de
Subject: Re: Performance regression in IO scheduler still there

Jan Kara <jack@...e.cz> writes:

> On Mon 16-11-09 11:47:44, Jan Kara wrote:
>> On Thu 12-11-09 15:44:02, Jeff Moyer wrote:
>> > Jan Kara <jack@...e.cz> writes:
>> > 
>> > > On Wed 11-11-09 12:43:30, Jeff Moyer wrote:
>> > >> Jan Kara <jack@...e.cz> writes:
>> > >> 
>> > >> >   Sadly, I don't see the improvement you can see :(. The numbers
>> > >> > are the same regardless of whether low_latency is set to 0:
>> > >> > 2.6.32-rc5 low_latency = 0:
>> > >> > 37.39 36.43 36.51 -> 36.776667 0.434920
>> > >> >   But my testing environment is a plain SATA drive so that probably
>> > >> > explains the difference...
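
For reference: low_latency is CFQ's per-device tunable and lives in
sysfs on these kernels, so a run with it disabled looks roughly like
this (a minimal sketch, assuming the drive is sda and CFQ is the
active scheduler):

  # The active scheduler for sda is the one shown in brackets:
  cat /sys/block/sda/queue/scheduler

  # Turn CFQ's low_latency heuristic off before the benchmark run:
  echo 0 > /sys/block/sda/queue/iosched/low_latency

  # ...and back on afterwards:
  echo 1 > /sys/block/sda/queue/iosched/low_latency
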
>> > >> 
>> > >> I just retested (10 runs for each kernel) on a SATA disk with no NCQ
>> > >> support and I could not see a difference.  I'll try to dig up a disk
>> > that supports NCQ.  Is that what you're using for testing?
>> > >   I don't think I am. How do I find out?
>> > 
>> > Good question.  ;-)  I grep for NCQ in the dmesg output and make sure
>> > the reported queue depth is greater than 0/32.  There may be a better
>> > way, though.
>>   Message in the logs:
>> ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
>> ata1.00: ATA-8: Hitachi HTS722016K9SA00, DCDOC54P, max UDMA/133
>> ata1.00: 312581808 sectors, multi 16: LBA48 NCQ (depth 0/32)
>> ata1.00: configured for UDMA/133
>>   So apparently no NCQ. /sys/block/sda/device/queue_depth shows 1 but I
>> guess that's just its way of saying "no NCQ".
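
Putting the two checks from this exchange together (the dmesg grep and
the sysfs queue_depth file), a quick way to tell whether NCQ is active,
assuming the drive is sda:

  # NCQ depth as logged at probe time; "depth 0/32" means no NCQ:
  dmesg | grep -i ncq

  # The same answer via sysfs; a queue_depth of 1 means NCQ is not
  # in use:
  cat /sys/block/sda/device/queue_depth
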
>> 
>>   What I thought might explain why I'm seeing the drop and you are not
>> is the size of RAM or the number of CPUs relative to the tiobench file
>> size or the number of threads. I'm running on a machine with 2 GB of
>> RAM, using a 4 GB file size. The machine has 2 cores and I'm using 16
>> tiobench threads. I'm now rerunning the tests with various numbers of
>> threads to see how big a difference it makes.
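
For anyone reproducing this, the setup described above corresponds
roughly to the following invocation (a sketch against the stock
tiobench.pl wrapper; the flag names are from memory and may differ
between tiobench versions, and /mnt/test is a placeholder for the
filesystem under test):

  # 4 GB total file size, 16 threads, 3 runs per data point:
  ./tiobench.pl --dir /mnt/test --size 4096 --threads 16 --numruns 3
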
>   OK, here are the numbers (3 runs of each test):
> 2.6.29:
> Threads	Avg		Stddev
> 1	42.043333	0.860439
> 2	40.836667	0.322938
> 4	41.810000	0.114310
> 8	40.190000	0.419603
> 16	39.950000	0.403072
> 32	39.373333	0.766913
>
> 2.6.32-rc7:
> Threads	Avg		Stddev
> 1	41.580000	0.403072
> 2	39.163333	0.374641
> 4	39.483333	0.400111
> 8	38.560000	0.106145
> 16	37.966667	0.098770
> 32	36.476667	0.032998
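
A note on how the Avg/Stddev columns are derived: they match the mean
and the population standard deviation (dividing by N, not N-1) of the
raw runs. The three runs quoted earlier, 37.39 36.43 36.51, reproduce
them exactly:

  printf '37.39\n36.43\n36.51\n' |
      awk '{ s += $1; ss += $1 * $1; n++ }
           END { m = s / n; printf "%f %f\n", m, sqrt(ss/n - m*m) }'
  # -> 36.776667 0.434920
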
>
>   So apparently the difference between 2.6.29 and 2.6.32-rc7 increases as
> the number of threads rises. How many threads have you been running with
> when using the SATA drive, and what machine is it?
>   I'm now running a test with a larger file size (8 GB instead of 4) to
> see what difference it makes.

I've been running with both 8 and 16 threads.  The machine has 4 CPUs
and 4GB of RAM.  I've been testing with an 8GB file size.

Cheers,
Jeff
