Message-ID: <x49bp4c7555.fsf@segfault.boston.devel.redhat.com>
Date: Thu, 23 Dec 2010 12:47:34 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Rogier Wolff <R.E.Wolff@...Wizard.nl>
Cc: Greg Freemyer <greg.freemyer@...il.com>,
Bruno Prémont <bonbons@...ux-vserver.org>,
linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org
Subject: Re: Slow disks.

Rogier Wolff <R.E.Wolff@...Wizard.nl> writes:
> On Thu, Dec 23, 2010 at 09:40:54AM -0500, Jeff Moyer wrote:
>> > In my performance calculations, 10ms average seek (should be around
>> > 7), 4ms average rotational latency for a total of 14ms. This would
>> > degrade for read-modify-write to 10+4+8 = 22ms. Still 10 times better
>> > than what we observe: service times on the order of 200-300ms.
>>
>> I didn't say it would account for all of your degradation, just that it
>> could affect performance. I'm sorry if I wasn't clear on that.
>
> We can live with a "2x performance degradation" due to stupid
> configuration. But not with the 10x-30x that we're seeing now.

Wow. I'm not willing to give up any performance due to
misconfiguration!
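
Just to put numbers next to the estimate above, here's a quick
back-of-the-envelope in Python (a rough sketch; the 7200 RPM and 10ms
seek figures are assumptions, not measurements of your drives):

  # Sanity-check the expected per-write service time (assumed figures).
  rpm = 7200
  full_rotation_ms = 60000.0 / rpm             # ~8.3ms per revolution
  avg_seek_ms = 10.0
  avg_rotational_ms = full_rotation_ms / 2     # ~4.2ms on average

  plain_write_ms = avg_seek_ms + avg_rotational_ms         # ~14ms
  # Read-modify-write pays roughly one extra revolution to re-read the
  # sector before writing it back.
  rmw_write_ms = plain_write_ms + full_rotation_ms         # ~22ms

  for observed_ms in (200, 300):
      print("observed %dms is %.0fx the %.0fms estimate"
            % (observed_ms, observed_ms / rmw_write_ms, rmw_write_ms))

So even granting the read-modify-write penalty, the observed service
times are still an order of magnitude off, which is why I keep asking
for the trace data.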
>> >> > md1 : active raid5 sda2[0] sdd2[3](S) sdb2[1] sdc2[4]
>> >> > 39067648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3]
>> >> > [UUU]
>> >>
>> >> A 512KB raid5 chunk with 4KB I/Os? That is a recipe for inefficiency.
>> >> Again, blktrace data would be helpful.
>> >
>> > Where did you get the 4kb IOs from? You mean from the iostat -x
>> > output?
>>
>> Yes, since that's all I have to go on at the moment.
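
(For what it's worth, the request size falls straight out of iostat -x:
the avgrq-sz column is reported in 512-byte sectors, so something like
the sketch below, with a made-up value standing in for your output,
gives the ~4KB figure.)

  # avgrq-sz from iostat -x is reported in 512-byte sectors.
  avgrq_sz_sectors = 8.0           # made-up value; read it from iostat -x
  avg_request_kb = avgrq_sz_sectors * 512 / 1024.0
  print("average request size: %.0f KB" % avg_request_kb)
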
>>
>> > The system/filesystem decided to do those small IOs. With the
>> > throughput we're getting on the filesystem, it better not try to write
>> > larger chunks...
>>
>> Your logic is a bit flawed, for so many reasons I'm not even going to
>> try to enumerate them here. Anyway, I'll continue to sound like a
>> broken record and ask for blktrace data.
>
> Here it is.
>
> http://prive.bitwizard.nl/blktrace.log
>
> I can't read those yet... Manual is unclear.

OK, I should have made it clear that I wanted the binary logs. No
matter, we'll work with what you've sent.
> My friend confessed to me today that he determined the "optimal" RAID
> block size with the exact same test as I had done, and reached the
> same conclusion. So that explains his raid blocksize of 512k.
>
> The system is a mailserver running on a raid on three of the disks.
> Most of the IOs are generated by the mail server software through the
> FS driver, and the raid system. It's not that we're running a database
> that inherently requires 4k IOs. Apparently what the
> system needs are those small IOs.
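
One more note on the chunk size while we're at it: with a 512KB chunk
on a three-disk raid5, a full stripe is 1MB of data, so 4KB writes can
never fill a stripe and every one of them pays the read-modify-write
parity update. Roughly (a sketch; array geometry taken from your
mdstat above):

  # Small-write penalty on raid5 (3 active disks, 512KB chunk, per the mdstat).
  n_disks = 3
  chunk_kb = 512
  stripe_data_kb = (n_disks - 1) * chunk_kb   # 1024KB of data per stripe

  write_kb = 4
  if write_kb < stripe_data_kb:
      # Sub-stripe write: read old data and old parity, then write new
      # data and new parity -- four disk I/Os for one 4KB write.
      disk_ops = 4
  else:
      # Full-stripe write: parity comes from the new data, no reads.
      disk_ops = n_disks
  print("%dKB write costs %d disk I/Os (full stripe is %dKB)"
        % (write_kb, disk_ops, stripe_data_kb))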

The log shows a lot of write barriers:
8,32 0 1183 169.033279975 778 A WBS 481958 + 2 <- (8,34) 8
                                ^^^
On pre-2.6.37 kernels, that will fully flush the device queue, which is
why you're seeing such a small queue depth. There was also a CFQ patch
that sped up fsync performance for small files that landed in .37. I
can't remember if you ran with a 2.6.37-rc or not. Have you? It may be
in your best interest to give the latest -rc a try and report back.
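
If you want to poke at this yourself, something along these lines over
the blkparse text output will count the barrier-flagged writes and give
a rough per-request service time (a sketch only: it assumes the default
blkparse output format and keys requests by sector, which is good
enough for eyeballing):

  import sys

  issued = {}        # (device, sector) -> issue (D) timestamp
  latencies = []
  barriers = 0

  for line in sys.stdin:
      fields = line.split()
      # Default blkparse format: dev cpu seq time pid action rwbs sector + len
      if len(fields) < 8 or fields[5] not in ('D', 'C'):
          continue
      try:
          timestamp = float(fields[3])
      except ValueError:
          continue                     # skip the summary lines at the end
      dev, action, rwbs, sector = fields[0], fields[5], fields[6], fields[7]
      if action == 'D':                # request issued to the driver
          issued[(dev, sector)] = timestamp
          if 'B' in rwbs:              # barrier flag in the RWBS field
              barriers += 1
      else:                            # 'C': request completed
          start = issued.pop((dev, sector), None)
          if start is not None:
              latencies.append(timestamp - start)

  if latencies:
      print("%d requests completed, %d barriers issued"
            % (len(latencies), barriers))
      print("average D->C service time: %.1f ms"
            % (1000.0 * sum(latencies) / len(latencies)))
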
Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/