Message-ID: <20091116221717.GA8819@duck.suse.cz>
Date:	Mon, 16 Nov 2009 23:17:17 +0100
From:	Jan Kara <jack@...e.cz>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	Jan Kara <jack@...e.cz>, jens.axboe@...cle.com,
	LKML <linux-kernel@...r.kernel.org>,
	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Galbraith <efault@....de>, mszeredi@...e.de
Subject: Re: Performance regression in IO scheduler still there

On Mon 16-11-09 12:03:00, Jeff Moyer wrote:
> Jan Kara <jack@...e.cz> writes:
> 
> > On Mon 16-11-09 11:47:44, Jan Kara wrote:
> >> On Thu 12-11-09 15:44:02, Jeff Moyer wrote:
> >> > Jan Kara <jack@...e.cz> writes:
> >> > 
> >> > > On Wed 11-11-09 12:43:30, Jeff Moyer wrote:
> >> > >> Jan Kara <jack@...e.cz> writes:
> >> > >> 
> >> > >> > >   Sadly, I don't see the improvement you can see :(. The numbers are the
> >> > >> > > same even with low_latency set to 0:
> >> > >> > 2.6.32-rc5 low_latency = 0:
> >> > >> > 37.39 36.43 36.51 -> 36.776667 0.434920
> >> > >> >   But my testing environment is a plain SATA drive so that probably
> >> > >> > explains the difference...
> >> > >> 
> >> > >> I just retested (10 runs for each kernel) on a SATA disk with no NCQ
> >> > >> support and I could not see a difference.  I'll try to dig up a disk
> >> > >> that supports NCQ.  Is that what you're using for testing?
> >> > >   I don't think I am. How do I find out?
> >> > 
> >> > Good question.  ;-)  I grep for NCQ in the dmesg output and make sure the
> >> > reported depth is greater than 0/32.  There may be a better way, though.
> >>   Message in the logs:
> >> ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> >> ata1.00: ATA-8: Hitachi HTS722016K9SA00, DCDOC54P, max UDMA/133
> >> ata1.00: 312581808 sectors, multi 16: LBA48 NCQ (depth 0/32)
> >> ata1.00: configured for UDMA/133
> >>   So apparently no NCQ. /sys/block/sda/device/queue_depth shows 1, but I
> >> guess that's just its way of saying "no NCQ".
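
  For reference, the same check can be scripted. A minimal sketch, assuming
the disk is sda (the sysfs path is the one quoted above; the dmesg parsing is
just a heuristic around the "NCQ (depth 0/32)" line format):

#!/usr/bin/env python3
# Sketch: report whether a disk looks NCQ-capable. Assumes the disk is sda.
# The sysfs queue_depth file is standard; grepping dmesg for "NCQ (depth"
# mirrors the manual check above and is only a heuristic.
import re
import subprocess

def queue_depth(disk="sda"):
    with open(f"/sys/block/{disk}/device/queue_depth") as f:
        return int(f.read().strip())

def dmesg_ncq_depths():
    out = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    # Lines look like: "ata1.00: ... LBA48 NCQ (depth 0/32)"
    return re.findall(r"NCQ \(depth (\d+)/(\d+)\)", out)

if __name__ == "__main__":
    print("queue_depth:", queue_depth())
    for used, total in dmesg_ncq_depths():
        note = " (NCQ effectively disabled)" if int(used) == 0 else ""
        print(f"dmesg: NCQ depth {used}/{total}{note}")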
> >> 
> >>   What I thought might explain why I'm seeing the drop and you are not is
> >> the size of RAM or the number of CPUs relative to the tiobench file size
> >> or number of threads. I'm running on a machine with 2 GB of RAM, using a
> >> 4 GB file size. The machine has 2 cores and I'm using 16 tiobench threads.
> >> I'm now rerunning the tests with various numbers of threads to see how big
> >> a difference it makes.
> >   OK, here are the numbers (3 runs of each test):
> > 2.6.29:
> > Threads	Avg		Stddev
> > 1	42.043333	0.860439
> > 2	40.836667	0.322938
> > 4	41.810000	0.114310
> > 8	40.190000	0.419603
> > 16	39.950000	0.403072
> > 32	39.373333	0.766913
> >
> > 2.6.32-rc7:
> > Threads	Avg		Stddev
> > 1	41.580000	0.403072
> > 2	39.163333	0.374641
> > 4	39.483333	0.400111
> > 8	38.560000	0.106145
> > 16	37.966667	0.098770
> > 32	36.476667	0.032998
> >
> >   So apparently the difference between 2.6.29 and 2.6.32-rc7 increases as
> > the number of threads rises. How many threads have you been running with
> > on the SATA drive, and what machine is it?
> >   I'm now running a test with a larger file size (8GB instead of 4) to see
> > what difference it makes.
> 
> I've been running with both 8 and 16 threads.  The machine has 4 CPUs
> and 4GB of RAM.  I've been testing with an 8GB file size.
  OK, I see a similar regression with the 8GB file size as well:
2.6.29:
Threads	Avg		Stddev
1	41.556667	0.787415
2	40.866667	0.714112
4	40.726667	0.228376
8	38.596667	0.344706
16	39.076667	0.180801
32	37.743333	0.147271

2.6.32-rc7:
Threads	Avg		Stddev
1	41.860000	0.063770
2	39.196667	0.012472
4	39.426667	0.162138
8	37.550000	0.040825
16	37.710000	0.096264
32	35.680000	0.109848
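
  (The Avg/Stddev columns above are just the mean and the population standard
deviation of the three runs. A minimal sketch of the computation, reproducing
the "37.39 36.43 36.51 -> 36.776667 0.434920" line quoted earlier:)

#!/usr/bin/env python3
# Mean and population standard deviation (pstdev divides by N, not N-1)
# over the per-configuration tiobench runs.
from statistics import mean, pstdev

runs = [37.39, 36.43, 36.51]  # three throughput numbers from one configuration
print(f"{mean(runs):.6f} {pstdev(runs):.6f}")  # prints: 36.776667 0.434920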

  BTW: I always run the test on a freshly created ext3 filesystem in
data=ordered mode with barrier=1.
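
  A minimal sketch of the whole loop, for anyone who wants to reproduce it.
The device node, mount point, and the tiobench.pl option names here are
assumptions for illustration, not a copy of my actual script:

#!/usr/bin/env python3
# Sketch of the benchmark loop: recreate ext3 before every run, mount it with
# data=ordered,barrier=1, and run tiobench for each thread count. The device,
# mount point, and tiobench.pl flags below are assumptions for illustration.
import subprocess

DEV = "/dev/sdb1"   # hypothetical scratch device - everything on it is wiped
MNT = "/mnt/test"   # hypothetical mount point

def sh(*cmd):
    subprocess.run(cmd, check=True)

for threads in (1, 2, 4, 8, 16, 32):
    for run in range(3):
        sh("mkfs.ext3", "-q", DEV)   # fresh filesystem for every single run
        sh("mount", "-o", "data=ordered,barrier=1", DEV, MNT)
        sh("tiobench.pl", "--dir", MNT, "--size", "8192",
           "--threads", str(threads), "--numruns", "1")
        sh("umount", MNT)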

									Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR